From newsgrps at dalyn.co.nz Sun Jun 1 18:39:18 2008 From: newsgrps at dalyn.co.nz (David Emerson) Date: Mon, 02 Jun 2008 11:39:18 +1200 Subject: [dba-SQLServer] Deleting Global Temporary Tables Message-ID: <20080601233814.HGBA4822.mta06.xtra.co.nz@Dalyn.dalyn.co.nz> Group, I have this code in a sproc: IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = object_id(N'dbo.##tblCAFRClientAsset') AND OBJECTPROPERTY(id, N'IsUserTable') = 0) DROP TABLE dbo.##tblCAFRClientAsset CREATE TABLE dbo.##tblCAFRClientAsset ( [ClientAssetID] [numeric](10, 0) IDENTITY (1, 1) NOT NULL , [AssetType] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , [Owner] [varchar] (10) COLLATE Latin1_General_CI_AS NULL , [Description] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , [Description2] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , [AccountNo] [varchar] (20) COLLATE Latin1_General_CI_AS NULL , [MarketValue] [money] NULL , [Contributions] [money] NULL , [ContributionsImp] [money] NULL , [InterestRate] [real] NULL , [PropertyType] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , [PropertyExpenses] [money] NULL , [InvestmentType] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , [InsuranceType] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , [InsCover] [money] NULL , [InsCoverImp] [money] NULL , [InsPolicyTerm] [smallint] NULL , [Comment] [varchar] (250) COLLATE Latin1_General_CI_AS NULL , [SortOrder] [smallint] NULL ) I am getting an error message when I run it "There is already an object named '##tblCAFRClientAsset' in the database.". This is on the line trying to create the table. It seems that the table is not being dropped. When I try to run just the select portion of the first line nothing is returned. How do I identify if the temporary table exists so I can drop it? 
Regards David Emerson Dalyn Software Ltd Wellington, New Zealand From stuart at lexacorp.com.pg Sun Jun 1 20:21:13 2008 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 02 Jun 2008 11:21:13 +1000 Subject: [dba-SQLServer] Deleting Global Temporary Tables In-Reply-To: <20080601233814.HGBA4822.mta06.xtra.co.nz@Dalyn.dalyn.co.nz> References: <20080601233814.HGBA4822.mta06.xtra.co.nz@Dalyn.dalyn.co.nz> Message-ID: <4843D7A9.28704.46C7B0DE@stuart.lexacorp.com.pg> Try: If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null DROP TABLE dbo.##tblCAFRClientAsset -- Stuart On 2 Jun 2008 at 11:39, David Emerson wrote: > Group, > > I have this code in a sproc: > > IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = > object_id(N'dbo.##tblCAFRClientAsset') AND OBJECTPROPERTY(id, > N'IsUserTable') = 0) > DROP TABLE dbo.##tblCAFRClientAsset > > CREATE TABLE dbo.##tblCAFRClientAsset ( > [ClientAssetID] [numeric](10, 0) IDENTITY (1, 1) NOT NULL , > [AssetType] [varchar] (50) COLLATE Latin1_General_CI_AS NULL , > [Owner] [varchar] (10) COLLATE Latin1_General_CI_AS NULL , > [Description] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > [Description2] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > [AccountNo] [varchar] (20) COLLATE Latin1_General_CI_AS NULL , > [MarketValue] [money] NULL , > [Contributions] [money] NULL , > [ContributionsImp] [money] NULL , > [InterestRate] [real] NULL , > [PropertyType] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > [PropertyExpenses] [money] NULL , > [InvestmentType] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > [InsuranceType] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > [InsCover] [money] NULL , > [InsCoverImp] [money] NULL , > [InsPolicyTerm] [smallint] NULL , > [Comment] [varchar] (250) COLLATE Latin1_General_CI_AS NULL , > [SortOrder] [smallint] NULL > ) > > I am getting an error message when I run it "There is already an > object named '##tblCAFRClientAsset' in the database.". This is on > the line trying to create the table. It seems that the table is not > being dropped. When I try to run just the select portion of the > first line nothing is returned. How do I identify if the temporary > table exists so I can drop it? > > Regards > > David Emerson > Dalyn Software Ltd > Wellington, New Zealand > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From newsgrps at dalyn.co.nz Sun Jun 1 21:08:31 2008 From: newsgrps at dalyn.co.nz (David Emerson) Date: Mon, 02 Jun 2008 14:08:31 +1200 Subject: [dba-SQLServer] Deleting Global Temporary Tables In-Reply-To: <4843D7A9.28704.46C7B0DE@stuart.lexacorp.com.pg> References: <20080601233814.HGBA4822.mta06.xtra.co.nz@Dalyn.dalyn.co.nz> <4843D7A9.28704.46C7B0DE@stuart.lexacorp.com.pg> Message-ID: <20080602020720.ARO4822.mta06.xtra.co.nz@Dalyn.dalyn.co.nz> Thanks Stuart, I hadn't realised that there was a special database for temporary tables. David. 
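For reference, a minimal sketch of the pattern Stuart suggests, using the table name from the thread. The key point is that global temporary tables (##) are created in tempdb, so a check against the current database's sysobjects never finds them, while OBJECT_ID with the tempdb prefix does. (The original check also compared OBJECTPROPERTY(id, N'IsUserTable') to 0, which would be false for a user table even if the id had been found.)

IF OBJECT_ID('tempdb..##tblCAFRClientAsset') IS NOT NULL
    DROP TABLE ##tblCAFRClientAsset

CREATE TABLE ##tblCAFRClientAsset (
    ClientAssetID numeric(10, 0) IDENTITY (1, 1) NOT NULL,
    AssetType varchar(50) COLLATE Latin1_General_CI_AS NULL
    -- ... remaining columns as in the original script
)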
At 2/06/2008, Stuart wrote: >Try: > >If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null >DROP TABLE dbo.##tblCAFRClientAsset > >-- >Stuart > >On 2 Jun 2008 at 11:39, David Emerson wrote: > > > Group, > > > > I have this code in a sproc: > > > > IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = > > object_id(N'dbo.##tblCAFRClientAsset') AND OBJECTPROPERTY(id, > > N'IsUserTable') = 0) > > DROP TABLE dbo.##tblCAFRClientAsset > > > > CREATE TABLE dbo.##tblCAFRClientAsset ( > > [ClientAssetID] [numeric](10, 0) IDENTITY (1, 1) > NOT NULL , > > [AssetType] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > > [Owner] [varchar] (10) COLLATE Latin1_General_CI_AS NULL , > > [Description] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [Description2] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [AccountNo] [varchar] (20) COLLATE > Latin1_General_CI_AS NULL , > > [MarketValue] [money] NULL , > > [Contributions] [money] NULL , > > [ContributionsImp] [money] NULL , > > [InterestRate] [real] NULL , > > [PropertyType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [PropertyExpenses] [money] NULL , > > [InvestmentType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [InsuranceType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [InsCover] [money] NULL , > > [InsCoverImp] [money] NULL , > > [InsPolicyTerm] [smallint] NULL , > > [Comment] [varchar] (250) COLLATE > Latin1_General_CI_AS NULL , > > [SortOrder] [smallint] NULL > > ) > > > > I am getting an error message when I run it "There is already an > > object named '##tblCAFRClientAsset' in the database.". This is on > > the line trying to create the table. It seems that the table is not > > being dropped. When I try to run just the select portion of the > > first line nothing is returned. How do I identify if the temporary > > table exists so I can drop it? > > > > Regards > > > > David Emerson > > Dalyn Software Ltd > > Wellington, New Zealand > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > >_______________________________________________ >dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com From newsgrps at dalyn.co.nz Sun Jun 1 21:15:17 2008 From: newsgrps at dalyn.co.nz (David Emerson) Date: Mon, 02 Jun 2008 14:15:17 +1200 Subject: [dba-SQLServer] Deleting Global Temporary Tables Message-ID: <20080602021417.ZXLF16357.mta04.xtra.co.nz@Dalyn.dalyn.co.nz> With further testing I was able to get both methods to work: IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE id = object_id(N'tempdb.dbo.##tblCAFRClientAsset')) DROP TABLE dbo.##tblCAFRClientAsset If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null DROP TABLE dbo.##tblCAFRClientAsset Does one method have any advantages over the other? David. 
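One way to compare the two methods yourself is to run both checks in a single batch in Query Analyser with the execution plan displayed, or with the timing and I/O statistics switched on — a quick sketch against the table name used in this thread:

SET STATISTICS TIME ON
SET STATISTICS IO ON

IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects
           WHERE id = OBJECT_ID(N'tempdb.dbo.##tblCAFRClientAsset'))
    PRINT 'found via sysobjects'

IF OBJECT_ID('tempdb..##tblCAFRClientAsset') IS NOT NULL
    PRINT 'found via OBJECT_ID'

SET STATISTICS TIME OFF
SET STATISTICS IO OFF

The sysobjects version has to compile and run a query; the OBJECT_ID test is a single metadata function call, which is where the difference in plan cost comes from.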
At 2/06/2008, Stuart wrote: >Try: > >If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null >DROP TABLE dbo.##tblCAFRClientAsset > >-- >Stuart > >On 2 Jun 2008 at 11:39, David Emerson wrote: > > > Group, > > > > I have this code in a sproc: > > > > IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = > > object_id(N'dbo.##tblCAFRClientAsset') AND OBJECTPROPERTY(id, > > N'IsUserTable') = 0) > > DROP TABLE dbo.##tblCAFRClientAsset > > > > CREATE TABLE dbo.##tblCAFRClientAsset ( > > [ClientAssetID] [numeric](10, 0) IDENTITY (1, 1) > NOT NULL , > > [AssetType] [varchar] (50) COLLATE > Latin1_General_CI_AS NULL , > > [Owner] [varchar] (10) COLLATE Latin1_General_CI_AS NULL , > > [Description] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [Description2] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [AccountNo] [varchar] (20) COLLATE > Latin1_General_CI_AS NULL , > > [MarketValue] [money] NULL , > > [Contributions] [money] NULL , > > [ContributionsImp] [money] NULL , > > [InterestRate] [real] NULL , > > [PropertyType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [PropertyExpenses] [money] NULL , > > [InvestmentType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [InsuranceType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > [InsCover] [money] NULL , > > [InsCoverImp] [money] NULL , > > [InsPolicyTerm] [smallint] NULL , > > [Comment] [varchar] (250) COLLATE > Latin1_General_CI_AS NULL , > > [SortOrder] [smallint] NULL > > ) > > > > I am getting an error message when I run it "There is already an > > object named '##tblCAFRClientAsset' in the database.". This is on > > the line trying to create the table. It seems that the table is not > > being dropped. When I try to run just the select portion of the > > first line nothing is returned. How do I identify if the temporary > > table exists so I can drop it? > > > > Regards > > > > David Emerson > > Dalyn Software Ltd > > Wellington, New Zealand > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > >_______________________________________________ >dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com From stuart at lexacorp.com.pg Sun Jun 1 21:51:04 2008 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 02 Jun 2008 12:51:04 +1000 Subject: [dba-SQLServer] Deleting Global Temporary Tables In-Reply-To: <20080602021417.ZXLF16357.mta04.xtra.co.nz@Dalyn.dalyn.co.nz> References: <20080602021417.ZXLF16357.mta04.xtra.co.nz@Dalyn.dalyn.co.nz> Message-ID: <4843ECB8.1088.4719F124@stuart.lexacorp.com.pg> I prefer the second because it's shorter :-) But try them both in Query Analyser and take a look at the execution plan costs! Version1: 0.00641 Version 2: 0.000001 The simple version is 6410 times more efficient!!!! -- Stuart On 2 Jun 2008 at 14:15, David Emerson wrote: > With further testing I was able to get both methods to work: > > IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE id = > object_id(N'tempdb.dbo.##tblCAFRClientAsset')) > DROP TABLE dbo.##tblCAFRClientAsset > > > If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null > DROP TABLE dbo.##tblCAFRClientAsset > > Does one method have any advantages over the other? > > David. 
> > At 2/06/2008, Stuart wrote: > >Try: > > > >If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null > >DROP TABLE dbo.##tblCAFRClientAsset > > > >-- > >Stuart > > > >On 2 Jun 2008 at 11:39, David Emerson wrote: > > > > > Group, > > > > > > I have this code in a sproc: > > > > > > IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = > > > object_id(N'dbo.##tblCAFRClientAsset') AND OBJECTPROPERTY(id, > > > N'IsUserTable') = 0) > > > DROP TABLE dbo.##tblCAFRClientAsset > > > > > > CREATE TABLE dbo.##tblCAFRClientAsset ( > > > [ClientAssetID] [numeric](10, 0) IDENTITY (1, 1) > > NOT NULL , > > > [AssetType] [varchar] (50) COLLATE > > Latin1_General_CI_AS NULL , > > > [Owner] [varchar] (10) COLLATE Latin1_General_CI_AS NULL , > > > [Description] [varchar] (50) COLLATE > > > Latin1_General_CI_AS NULL , > > > [Description2] [varchar] (50) COLLATE > > > Latin1_General_CI_AS NULL , > > > [AccountNo] [varchar] (20) COLLATE > > Latin1_General_CI_AS NULL , > > > [MarketValue] [money] NULL , > > > [Contributions] [money] NULL , > > > [ContributionsImp] [money] NULL , > > > [InterestRate] [real] NULL , > > > [PropertyType] [varchar] (50) COLLATE > > > Latin1_General_CI_AS NULL , > > > [PropertyExpenses] [money] NULL , > > > [InvestmentType] [varchar] (50) COLLATE > > > Latin1_General_CI_AS NULL , > > > [InsuranceType] [varchar] (50) COLLATE > > > Latin1_General_CI_AS NULL , > > > [InsCover] [money] NULL , > > > [InsCoverImp] [money] NULL , > > > [InsPolicyTerm] [smallint] NULL , > > > [Comment] [varchar] (250) COLLATE > > Latin1_General_CI_AS NULL , > > > [SortOrder] [smallint] NULL > > > ) > > > > > > I am getting an error message when I run it "There is already an > > > object named '##tblCAFRClientAsset' in the database.". This is on > > > the line trying to create the table. It seems that the table is not > > > being dropped. When I try to run just the select portion of the > > > first line nothing is returned. How do I identify if the temporary > > > table exists so I can drop it? > > > > > > Regards > > > > > > David Emerson > > > Dalyn Software Ltd > > > Wellington, New Zealand > > > _______________________________________________ > > > dba-SQLServer mailing list > > > dba-SQLServer at databaseadvisors.com > > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > > http://www.databaseadvisors.com > > > > > > > > >_______________________________________________ > >dba-SQLServer mailing list > >dba-SQLServer at databaseadvisors.com > >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > >http://www.databaseadvisors.com > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From newsgrps at dalyn.co.nz Sun Jun 1 22:16:17 2008 From: newsgrps at dalyn.co.nz (David Emerson) Date: Mon, 02 Jun 2008 15:16:17 +1200 Subject: [dba-SQLServer] Deleting Global Temporary Tables In-Reply-To: <4843ECB8.1088.4719F124@stuart.lexacorp.com.pg> References: <20080602021417.ZXLF16357.mta04.xtra.co.nz@Dalyn.dalyn.co.nz> <4843ECB8.1088.4719F124@stuart.lexacorp.com.pg> Message-ID: <20080602031500.MJCK8128.mta03.xtra.co.nz@Dalyn.dalyn.co.nz> Ah hah. And I know just what I can do with the extra 0.006409 seconds ( they don't call it instant coffee for nothing) At 2/06/2008, you wrote: >I prefer the second because it's shorter :-) > >But try them both in Query Analyser and take a look at the execution >plan costs! 
> >Version1: 0.00641 >Version 2: 0.000001 > >The simple version is 6410 times more efficient!!!! > >-- >Stuart > > >On 2 Jun 2008 at 14:15, David Emerson wrote: > > > With further testing I was able to get both methods to work: > > > > IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE id = > > object_id(N'tempdb.dbo.##tblCAFRClientAsset')) > > DROP TABLE dbo.##tblCAFRClientAsset > > > > > > If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null > > DROP TABLE dbo.##tblCAFRClientAsset > > > > Does one method have any advantages over the other? > > > > David. > > > > At 2/06/2008, Stuart wrote: > > >Try: > > > > > >If Object_Id('tempdb..##tblCAFRClientAsset') is Not Null > > >DROP TABLE dbo.##tblCAFRClientAsset > > > > > >-- > > >Stuart > > > > > >On 2 Jun 2008 at 11:39, David Emerson wrote: > > > > > > > Group, > > > > > > > > I have this code in a sproc: > > > > > > > > IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = > > > > object_id(N'dbo.##tblCAFRClientAsset') AND OBJECTPROPERTY(id, > > > > N'IsUserTable') = 0) > > > > DROP TABLE dbo.##tblCAFRClientAsset > > > > > > > > CREATE TABLE dbo.##tblCAFRClientAsset ( > > > > [ClientAssetID] [numeric](10, 0) IDENTITY (1, 1) > > > NOT NULL , > > > > [AssetType] [varchar] (50) COLLATE > > > Latin1_General_CI_AS NULL , > > > > [Owner] [varchar] (10) COLLATE > Latin1_General_CI_AS NULL , > > > > [Description] [varchar] (50) COLLATE > > > > Latin1_General_CI_AS NULL , > > > > [Description2] [varchar] (50) COLLATE > > > > Latin1_General_CI_AS NULL , > > > > [AccountNo] [varchar] (20) COLLATE > > > Latin1_General_CI_AS NULL , > > > > [MarketValue] [money] NULL , > > > > [Contributions] [money] NULL , > > > > [ContributionsImp] [money] NULL , > > > > [InterestRate] [real] NULL , > > > > [PropertyType] [varchar] (50) COLLATE > > > > Latin1_General_CI_AS NULL , > > > > [PropertyExpenses] [money] NULL , > > > > [InvestmentType] [varchar] (50) COLLATE > > > > Latin1_General_CI_AS NULL , > > > > [InsuranceType] [varchar] (50) COLLATE > > > > Latin1_General_CI_AS NULL , > > > > [InsCover] [money] NULL , > > > > [InsCoverImp] [money] NULL , > > > > [InsPolicyTerm] [smallint] NULL , > > > > [Comment] [varchar] (250) COLLATE > > > Latin1_General_CI_AS NULL , > > > > [SortOrder] [smallint] NULL > > > > ) > > > > > > > > I am getting an error message when I run it "There is already an > > > > object named '##tblCAFRClientAsset' in the database.". This is on > > > > the line trying to create the table. It seems that the table is not > > > > being dropped. When I try to run just the select portion of the > > > > first line nothing is returned. How do I identify if the temporary > > > > table exists so I can drop it? 
> > > > > > > > Regards > > > > > > > > David Emerson > > > > Dalyn Software Ltd > > > > Wellington, New Zealand > > > > _______________________________________________ > > > > dba-SQLServer mailing list > > > > dba-SQLServer at databaseadvisors.com > > > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > > > http://www.databaseadvisors.com > > > > > > > > > > > > >_______________________________________________ > > >dba-SQLServer mailing list > > >dba-SQLServer at databaseadvisors.com > > >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > >http://www.databaseadvisors.com > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > >_______________________________________________ >dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com From word_diva at hotmail.com Mon Jun 2 10:02:16 2008 From: word_diva at hotmail.com (Nancy Lytle) Date: Mon, 2 Jun 2008 10:02:16 -0500 Subject: [dba-SQLServer] Job in Bermuda In-Reply-To: <29f585dd0805301148n77b50202u5f91d5bbfcd27db5@mail.gmail.com> References: <29f585dd0805301148n77b50202u5f91d5bbfcd27db5@mail.gmail.com> Message-ID: Arthur, when you get back on line could you email me offline? I just got an email about a job in Bermuda for a SQL DBA, and thought you might have some tips for me about how it is getting a job, immigrating, etc to Bermuda. Thank you, Nancy Lytle word_diva at hotmail.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Friday, May 30, 2008 1:48 PM To: Access Developers discussion and problem solving; Discussion concerning MS SQL Server; Discussion of Hardware and Software issues Subject: [dba-SQLServer] Momentary Adieu Sorry for the Cross-post, but I wanted to ensure that I touch all my friends on these lists. On Sunday morning I am moving to Bermuda to take a job as a SQL developer. It's a dream job and the money is excellent and the people are all really smart and the dress-code is very casual (tee shirt and shorts and sandals), and there is neither snow nor taxes. Where's the down side? So, consider this official notice of the creation of the Bermuda chapter of our group. My email will remain the same, so I won't lose touch, but it may take me a few days to get the Internet etc. installed. But fair warning, I'll be back in your faces within a few days. And we could always plan an dbAdvisor's conference in Bermuda, although the hotel prices are rather steep. Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com No virus found in this incoming message. Checked by AVG. Version: 8.0.93 / Virus Database: 269.24.3/1472 - Release Date: 5/29/2008 7:27 AM No virus found in this outgoing message. Checked by AVG. 
Version: 8.0.93 / Virus Database: 269.24.4/1478 - Release Date: 6/2/2008 7:12 AM From jwcolby at colbyconsulting.com Tue Jun 3 15:47:10 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 03 Jun 2008 16:47:10 -0400 Subject: [dba-SQLServer] Undo Query Message-ID: <4845ADCE.2050100@colbyconsulting.com> Just idle curiosity, does an "undo query" happen at the same rate as a "do query"? I started a query running to append records from a ninety million record table to a table that contains a subset of the fields. Basically I have a denormalized source table with name, name2, name3 etc fields. Each of these have fname, mname, gender, age etc. Since these are denormalized "family" records, there are fewer Name2 records than Name1, fewer still Name3 etc. I neglected to put in a "where name2 is not null" clause and 30 minutes into the second append I realized that. I canceled the query and it is still "undoing" the query. Which led me to wonder the relative efficiency of "doing" vs "undoing". No Indexes in place on the target table of course. Any ideas on the relative efficiencies? Is undoing an append much slower than the append? -- John W. Colby www.ColbyConsulting.com From fhtapia at gmail.com Tue Jun 3 19:13:14 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Tue, 3 Jun 2008 17:13:14 -0700 Subject: [dba-SQLServer] Undo Query In-Reply-To: <4845ADCE.2050100@colbyconsulting.com> References: <4845ADCE.2050100@colbyconsulting.com> Message-ID: The process that is occurring is that the system is "rolling back" the transaction as it existed in the transaction log. How quick depends on what disk the transaction log is on in order re-process it all. I have noticed in my experience that rolling back an action does not take as long as doing something, but it does depend on the number of records processed etc. as always ymmv -- Francisco On Tue, Jun 3, 2008 at 1:47 PM, jwcolby wrote: > Just idle curiosity, does an "undo query" happen at the same > rate as a "do query"? > > I started a query running to append records from a ninety > million record table to a table that contains a subset of > the fields. Basically I have a denormalized source table > with name, name2, name3 etc fields. Each of these have > fname, mname, gender, age etc. Since these are denormalized > "family" records, there are fewer Name2 records than Name1, > fewer still Name3 etc. > > I neglected to put in a "where name2 is not null" clause and > 30 minutes into the second append I realized that. I > canceled the query and it is still "undoing" the query. > Which led me to wonder the relative efficiency of "doing" vs > "undoing". > > No Indexes in place on the target table of course. > > Any ideas on the relative efficiencies? Is undoing an > append much slower than the append? > > -- > John W. Colby > www.ColbyConsulting.com > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From jwcolby at colbyconsulting.com Tue Jun 3 19:20:08 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 03 Jun 2008 20:20:08 -0400 Subject: [dba-SQLServer] Undo Query In-Reply-To: References: <4845ADCE.2050100@colbyconsulting.com> Message-ID: <4845DFB8.7000006@colbyconsulting.com> Francisco, Thanks for that. It was just idle curiosity. 
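As an aside on the original mishap (the missing "where name2 is not null"), one way to keep an accidental long rollback small is to run big appends in batches, so each committed chunk is final and cancelling only undoes the chunk in flight. A rough sketch with assumed table, column and key names, since the real schema isn't fully spelled out in the thread:

-- Assumed names throughout; walks the source key range one block at a time.
DECLARE @LastID int, @MaxID int
SET @LastID = 0
SELECT @MaxID = MAX(ID) FROM dbo.tblSource

WHILE @LastID < @MaxID
BEGIN
    INSERT INTO dbo.tblName2 (FName, MName, Gender, Age)
    SELECT FName2, MName2, Gender2, Age2
    FROM dbo.tblSource
    WHERE ID > @LastID AND ID <= @LastID + 50000
      AND Name2 IS NOT NULL   -- the filter that was left out of the original run

    SET @LastID = @LastID + 50000
END

Each INSERT commits on its own in the default autocommit mode, so cancelling part-way through only rolls back the current 50,000-ID block.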
I really need to stay busy so that I don't get struck with that idle curiosity stuff. I didn't time the initial query running nor the rollback, but it didn't seem to be markedly different one way or the other. John W. Colby www.ColbyConsulting.com Francisco Tapia wrote: > The process that is occurring is that the system is "rolling back" the > transaction as it existed in the transaction log. How quick depends on what > disk the transaction log is on in order re-process it all. I have noticed > in my experience that rolling back an action does not take as long as doing > something, but it does depend on the number of records processed etc. as > always ymmv > > -- > Francisco > > On Tue, Jun 3, 2008 at 1:47 PM, jwcolby wrote: > >> Just idle curiosity, does an "undo query" happen at the same >> rate as a "do query"? >> >> I started a query running to append records from a ninety >> million record table to a table that contains a subset of >> the fields. Basically I have a denormalized source table >> with name, name2, name3 etc fields. Each of these have >> fname, mname, gender, age etc. Since these are denormalized >> "family" records, there are fewer Name2 records than Name1, >> fewer still Name3 etc. >> >> I neglected to put in a "where name2 is not null" clause and >> 30 minutes into the second append I realized that. I >> canceled the query and it is still "undoing" the query. >> Which led me to wonder the relative efficiency of "doing" vs >> "undoing". >> >> No Indexes in place on the target table of course. >> >> Any ideas on the relative efficiencies? Is undoing an >> append much slower than the append? >> >> -- >> John W. Colby >> www.ColbyConsulting.com >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > > From jwcolby at colbyconsulting.com Wed Jun 4 11:05:43 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Wed, 04 Jun 2008 12:05:43 -0400 Subject: [dba-SQLServer] Where do you put generic stored procedures Message-ID: <4846BD57.1000708@colbyconsulting.com> I have a half dozen databases, which I am creating standard field names for, and then creating parameterized stored procedures to allow me to do things like drop specific indexes, rebuild those indexes, update a set of hash fields etc. Where would I put these stored procedures. I understand some people do not think putting them in the Master database is a good idea, but they do not "belong" to any of the specific databases either. Do you create your own database and place them in there? Some other strategy? -- John W. Colby www.ColbyConsulting.com From Gustav at cactus.dk Wed Jun 4 11:23:34 2008 From: Gustav at cactus.dk (Gustav Brock) Date: Wed, 04 Jun 2008 18:23:34 +0200 Subject: [dba-SQLServer] Where do you put generic stored procedures Message-ID: Hi John Model? Or run your own: cccommon or ccsys - you get the idea. /gustav >>> jwcolby at colbyconsulting.com 04-06-2008 18:05 >>> I have a half dozen databases, which I am creating standard field names for, and then creating parameterized stored procedures to allow me to do things like drop specific indexes, rebuild those indexes, update a set of hash fields etc. Where would I put these stored procedures. I understand some people do not think putting them in the Master database is a good idea, but they do not "belong" to any of the specific databases either. 
Do you create your own database and place them in there? Some other strategy? -- John W. Colby www.ColbyConsulting.com From fuller.artful at gmail.com Wed Jun 4 11:26:52 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Wed, 4 Jun 2008 13:26:52 -0300 Subject: [dba-SQLServer] Where do you put generic stored procedures In-Reply-To: <4846BD57.1000708@colbyconsulting.com> References: <4846BD57.1000708@colbyconsulting.com> Message-ID: <29f585dd0806040926q27abb216ud8f17a7728045264@mail.gmail.com> Like all other things in SQL Server, there are several approaches. One is to place them in Master, where they will always be found. This demands that you back up master every time you make a change. Far too people IME back up master, which is plain silly, but that seems to be how it often is in the wild. The down side to this approach is that if you supply the database to a client or similar, you'll need a script to generate the sprocs on the client machine. That's a not a large task, but I do recommend that you name the sprocs in a way that makes them easy to identify. While I'm on the naming topic, it's a best practice never to name a sproc with the prefix "sp". That is for MicroSoft. You might name yours "gp_*", which would isolate them alphabetically so you can easily grab them all. A second approach, which I like for rock-solid things and which avoids the problem above, is to place them in Model. Model is so named because it is the template from which all new databases are created. You'll still have to create a script to create them in existing databases, but all future ones will automatically contain them (and anything else you add to Model). I have even created a Model that is the template for your standard order-entry database, with the tables for Customer, Order, Order Details, Product, etc. already in there. I might have to modify a few columns, but most of the grunt work gets done automatically using this method. Of course, this method has a down side, too. Should you update one of your sprocs, you'll need to revisit the other databases and alter the sprocs in them. Choose your poison. hth, Arthur On Wed, Jun 4, 2008 at 1:05 PM, jwcolby wrote: > I have a half dozen databases, which I am creating standard > field names for, and then creating parameterized stored > procedures to allow me to do things like drop specific > indexes, rebuild those indexes, update a set of hash fields etc. > > Where would I put these stored procedures. I understand > some people do not think putting them in the Master database > is a good idea, but they do not "belong" to any of the > specific databases either. > > Do you create your own database and place them in there? > Some other strategy? > > -- > John W. Colby > www.ColbyConsulting.com > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Wed Jun 4 11:42:55 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Wed, 04 Jun 2008 12:42:55 -0400 Subject: [dba-SQLServer] Where do you put generic stored procedures In-Reply-To: References: Message-ID: <4846C60F.7060909@colbyconsulting.com> Thanks Gustav. John W. Colby www.ColbyConsulting.com Gustav Brock wrote: > Hi John > > Model? > > Or run your own: cccommon or ccsys - you get the idea. 
> > /gustav > >>>> jwcolby at colbyconsulting.com 04-06-2008 18:05 >>> > I have a half dozen databases, which I am creating standard > field names for, and then creating parameterized stored > procedures to allow me to do things like drop specific > indexes, rebuild those indexes, update a set of hash fields etc. > > Where would I put these stored procedures. I understand > some people do not think putting them in the Master database > is a good idea, but they do not "belong" to any of the > specific databases either. > > Do you create your own database and place them in there? > Some other strategy? > From jwcolby at colbyconsulting.com Wed Jun 4 11:49:51 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Wed, 04 Jun 2008 12:49:51 -0400 Subject: [dba-SQLServer] Where do you put generic stored procedures In-Reply-To: <29f585dd0806040926q27abb216ud8f17a7728045264@mail.gmail.com> References: <4846BD57.1000708@colbyconsulting.com> <29f585dd0806040926q27abb216ud8f17a7728045264@mail.gmail.com> Message-ID: <4846C7AF.6040402@colbyconsulting.com> Arthur, Are you in your new home now? From previous emails it sounded like this was move week. So what do you think of the "have your own 'company' database" into which you throw these things. They would be all in one place now. John W. Colby www.ColbyConsulting.com Arthur Fuller wrote: > Like all other things in SQL Server, there are several approaches. One is to > place them in Master, where they will always be found. This demands that you > back up master every time you make a change. Far too people IME back up > master, which is plain silly, but that seems to be how it often is in the > wild. > > The down side to this approach is that if you supply the database to a > client or similar, you'll need a script to generate the sprocs on the client > machine. That's a not a large task, but I do recommend that you name the > sprocs in a way that makes them easy to identify. While I'm on the naming > topic, it's a best practice never to name a sproc with the prefix "sp". That > is for MicroSoft. You might name yours "gp_*", which would isolate them > alphabetically so you can easily grab them all. > > A second approach, which I like for rock-solid things and which avoids the > problem above, is to place them in Model. Model is so named because it is > the template from which all new databases are created. You'll still have to > create a script to create them in existing databases, but all future ones > will automatically contain them (and anything else you add to Model). I have > even created a Model that is the template for your standard order-entry > database, with the tables for Customer, Order, Order Details, Product, etc. > already in there. I might have to modify a few columns, but most of the > grunt work gets done automatically using this method. > > Of course, this method has a down side, too. Should you update one of your > sprocs, you'll need to revisit the other databases and alter the sprocs in > them. > > Choose your poison. > > hth, > Arthur > > On Wed, Jun 4, 2008 at 1:05 PM, jwcolby wrote: > >> I have a half dozen databases, which I am creating standard >> field names for, and then creating parameterized stored >> procedures to allow me to do things like drop specific >> indexes, rebuild those indexes, update a set of hash fields etc. >> >> Where would I put these stored procedures. 
I understand >> some people do not think putting them in the Master database >> is a good idea, but they do not "belong" to any of the >> specific databases either. >> >> Do you create your own database and place them in there? >> Some other strategy? >> >> -- >> John W. Colby >> www.ColbyConsulting.com >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Thu Jun 5 07:14:24 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Thu, 5 Jun 2008 09:14:24 -0300 Subject: [dba-SQLServer] Where do you put generic stored procedures In-Reply-To: <4846C7AF.6040402@colbyconsulting.com> References: <4846BD57.1000708@colbyconsulting.com> <29f585dd0806040926q27abb216ud8f17a7728045264@mail.gmail.com> <4846C7AF.6040402@colbyconsulting.com> Message-ID: <29f585dd0806050514y7981f648n83d66ddd6684a836@mail.gmail.com> Yes I'm in my new home and on day four of my new work environment. The company found me "temporary" digs (three month lease) so I had a starting point from which to investigate the alternatives. It's a one-bedroom flat with marble floors and a yard with lots of plants and several walk-in closets, and it's a five-minute walk from a stunningly gorgeous beach called Coco Reef. There's a luxury resort hotel there, from which I'm renting the flat, so yesterday I went there to have dinner on the deck. Brought my notebook and sat facing the ocean and effortlessly hooked into their wireless and wow, that's my notion of an office! I can definitely grow used to this. The 'company' database works too. There are tradeoffs in every approach. You can call sprocs in db1 from db2 by specifically citing the full object name, but then if you ship the db to a client then you'll have to ship two dbs not one. Not that there's anything wrong with that, in fact it might have the same advantage as it does in Access (e.g. libraries). Arthur On Wed, Jun 4, 2008 at 1:49 PM, jwcolby wrote: > Arthur, > > Are you in your new home now? From previous emails it > sounded like this was move week. > > So what do you think of the "have your own 'company' > database" into which you throw these things. They would be > all in one place now. > > John W. Colby > From jwcolby at colbyconsulting.com Thu Jun 5 07:33:23 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 05 Jun 2008 08:33:23 -0400 Subject: [dba-SQLServer] Where do you put generic stored procedures In-Reply-To: <29f585dd0806050514y7981f648n83d66ddd6684a836@mail.gmail.com> References: <4846BD57.1000708@colbyconsulting.com> <29f585dd0806040926q27abb216ud8f17a7728045264@mail.gmail.com> <4846C7AF.6040402@colbyconsulting.com> <29f585dd0806050514y7981f648n83d66ddd6684a836@mail.gmail.com> Message-ID: <4847DD13.5090106@colbyconsulting.com> Arthur, I am not shipping databases so any discussion of that is a moo point to quote Joey from Friends. I have many different databases, all running ATM on the same server, though they are backed up to a neighboring server. I am standardizing the tables in the individual databases such that each has exactly the same field names and set of fields. Thus I need a single set of SProcs that work on any of the databases. 
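To illustrate the sort of thing a single shared stored procedure can do against any of the databases, here is a rough sketch, assuming a central utility database (called ccAdmin here purely as a placeholder) and made-up table and index names. The EXEC() of a string runs as its own batch, so the USE inside it switches context only for that statement:

-- Lives in the central utility database; rebuilds a named index in any named database.
CREATE PROCEDURE dbo.cc_RebuildIndex
    @DatabaseName sysname,
    @TableName    sysname,
    @IndexName    sysname
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @sql nvarchar(4000)
    SET @sql = N'USE ' + QUOTENAME(@DatabaseName) + N'; '
             + N'ALTER INDEX ' + QUOTENAME(@IndexName)
             + N' ON dbo.' + QUOTENAME(@TableName) + N' REBUILD;'
    EXEC (@sql)
END

-- Called from anywhere with a three-part name, e.g.:
-- EXEC ccAdmin.dbo.cc_RebuildIndex N'SomeListDB', N'tblPeople', N'IX_LastName'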
I do not want to have to "go" to that database to get them to run, in fact I want to "go" to a single database and be able to run the SPRoc on any of my databases. Thus it appears that the "company" database is the answer. If I do need to move any or all of the databases to another server, then I just copy the company database along with the individual databases. The living accommodations sound delightful. Work hard, make a lot of money and enjoy your time in the sun. John W. Colby www.ColbyConsulting.com Arthur Fuller wrote: > Yes I'm in my new home and on day four of my new work environment. The > company found me "temporary" digs (three month lease) so I had a starting > point from which to investigate the alternatives. It's a one-bedroom flat > with marble floors and a yard with lots of plants and several walk-in > closets, and it's a five-minute walk from a stunningly gorgeous beach called > Coco Reef. There's a luxury resort hotel there, from which I'm renting the > flat, so yesterday I went there to have dinner on the deck. Brought my > notebook and sat facing the ocean and effortlessly hooked into their > wireless and wow, that's my notion of an office! I can definitely grow used > to this. > > The 'company' database works too. There are tradeoffs in every approach. You > can call sprocs in db1 from db2 by specifically citing the full object name, > but then if you ship the db to a client then you'll have to ship two dbs not > one. Not that there's anything wrong with that, in fact it might have the > same advantage as it does in Access (e.g. libraries). > > Arthur > > On Wed, Jun 4, 2008 at 1:49 PM, jwcolby wrote: > >> Arthur, >> >> Are you in your new home now? From previous emails it >> sounded like this was move week. >> >> So what do you think of the "have your own 'company' >> database" into which you throw these things. They would be >> all in one place now. >> >> John W. Colby >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Fri Jun 6 09:20:06 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Fri, 06 Jun 2008 10:20:06 -0400 Subject: [dba-SQLServer] Put humpty dumpty back together again Message-ID: <48494796.2060006@colbyconsulting.com> I have a database that has split the address line into HouseNumberPrefix HouseNumber HouseNumberSuffix Direction StreetName Mode (N, NW etc) Quadrant Appt# I need to put Humpty back together again to feed off to Address Validation. How would I do that in SQL? I THINK I can just append them all together with spaces between the parts and that would be fine EXCEPT that when you do something like NULL + SomeString you end up with null. How would I do what I am trying to do in SQL? -- John W. Colby www.ColbyConsulting.com From ssharkins at gmail.com Fri Jun 6 12:21:12 2008 From: ssharkins at gmail.com (Susan Harkins) Date: Fri, 6 Jun 2008 13:21:12 -0400 Subject: [dba-SQLServer] deleting duplicates across multiple tables Message-ID: <021101c8c7f9$fd5f1430$2f8601c7@SusanOne> http://blogs.techrepublic.com.com/datacenter/?p=372 The above article discusses a simple technique for deleting duplicates in a single table. A reader wants to know how to expand it to deal with related tables. You can use referential integrity to prevent orphans, but... 
if the duplicates have related duplicates, you still end up with duplicates -- any idea how to expand this? Susan H. From fuller.artful at gmail.com Fri Jun 6 15:49:25 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 6 Jun 2008 17:49:25 -0300 Subject: [dba-SQLServer] deleting duplicates across multiple tables In-Reply-To: <021101c8c7f9$fd5f1430$2f8601c7@SusanOne> References: <021101c8c7f9$fd5f1430$2f8601c7@SusanOne> Message-ID: <29f585dd0806061349u5e5db23ep262602369f9435d0@mail.gmail.com> Cascade delete would automatically take care of it. You could also code a delete trigger. It would grab the value of the PK from the inserted table then delete matching values in the related tables. Not much coding required. This would "automate" future deletes, but obviously can't do anything about rows already deleted. However, you could find the orphaned child rows using an outer join from the child table and criteria of "Parent.PK IS NULL". hth, Arthur On Fri, Jun 6, 2008 at 2:21 PM, Susan Harkins wrote: > http://blogs.techrepublic.com.com/datacenter/?p=372 > > The above article discusses a simple technique for deleting duplicates in a > single table. A reader wants to know how to expand it to deal with related > tables. > > You can use referential integrity to prevent orphans, but... if the > duplicates have related duplicates, you still end up with duplicates -- any > idea how to expand this? > > Susan H. > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From stuart at lexacorp.com.pg Fri Jun 6 19:40:16 2008 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Sat, 07 Jun 2008 10:40:16 +1000 Subject: [dba-SQLServer] Put humpty dumpty back together again In-Reply-To: <48494796.2060006@colbyconsulting.com> References: <48494796.2060006@colbyconsulting.com> Message-ID: <484A6590.29195.3E64223@stuart.lexacorp.com.pg> Wrap each potyential problem field in an ISNULL() It's the equivalent of the VBA NZ() On 6 Jun 2008 at 10:20, jwcolby wrote: > I have a database that has split the address line into > > HouseNumberPrefix > HouseNumber > HouseNumberSuffix > Direction > StreetName > Mode (N, NW etc) > Quadrant > Appt# > > I need to put Humpty back together again to feed off to > Address Validation. How would I do that in SQL? > > I THINK I can just append them all together with spaces > between the parts and that would be fine EXCEPT that when > you do something like NULL + SomeString you end up with null. > > How would I do what I am trying to do in SQL? > > -- > John W. Colby > www.ColbyConsulting.com > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From bheid at sc.rr.com Sun Jun 8 13:23:39 2008 From: bheid at sc.rr.com (Bobby Heid) Date: Sun, 8 Jun 2008 14:23:39 -0400 Subject: [dba-SQLServer] Put humpty dumpty back together again In-Reply-To: <48494796.2060006@colbyconsulting.com> References: <48494796.2060006@colbyconsulting.com> Message-ID: <003a01c8c994$c6436ef0$52ca4cd0$@rr.com> John, I think you want something along the lines of: Isnull(HouseNumberPrefix,'') + ' ' + Isnull(HouseNumber,'') + ' ' + ... Of course, you'd have to figure out what to do with the empty spaces for the empty fields. 
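One way to deal with those empty spaces is to move the separator inside the ISNULL, so a NULL piece contributes nothing at all rather than a stray blank: NULL + ' ' stays NULL (with the default CONCAT_NULL_YIELDS_NULL setting), and the ISNULL then maps it to an empty string. A sketch using the column names from the question, with an assumed table name and assuming the pieces are character columns:

SELECT RTRIM(
         ISNULL(HouseNumberPrefix + ' ', '')
       + ISNULL(HouseNumber + ' ', '')
       + ISNULL(HouseNumberSuffix + ' ', '')
       + ISNULL(Direction + ' ', '')
       + ISNULL(StreetName + ' ', '')
       + ISNULL(Mode + ' ', '')
       + ISNULL(Quadrant + ' ', '')
       + ISNULL([Appt#], '')
       ) AS AddressLine
FROM dbo.tblAddress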
Bobby -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby Sent: Friday, June 06, 2008 10:20 AM To: Access Developers discussion and problem solving; Dba-Sqlserver Subject: [dba-SQLServer] Put humpty dumpty back together again I have a database that has split the address line into HouseNumberPrefix HouseNumber HouseNumberSuffix Direction StreetName Mode (N, NW etc) Quadrant Appt# I need to put Humpty back together again to feed off to Address Validation. How would I do that in SQL? I THINK I can just append them all together with spaces between the parts and that would be fine EXCEPT that when you do something like NULL + SomeString you end up with null. How would I do what I am trying to do in SQL? -- John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Sun Jun 8 14:39:35 2008 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sun, 08 Jun 2008 15:39:35 -0400 Subject: [dba-SQLServer] Put humpty dumpty back together again In-Reply-To: <003a01c8c994$c6436ef0$52ca4cd0$@rr.com> References: <48494796.2060006@colbyconsulting.com> <003a01c8c994$c6436ef0$52ca4cd0$@rr.com> Message-ID: <484C3577.3030702@colbyconsulting.com> Yes, that is what I want. I don't think the extra spaces matter for my purposes, though I am not entirely sure either. The entire point of the exercise is to get a single address line to hand off to another program to do address validation on. How will that program handle extra spaces in there? I have no clue. The bigger problem is that I have to do 97 million addresses, and have 7 pieces to append which means close to 700 million calls to isnull(). I am trying very hard not to "pre-process" this simply because I get weekly updates to this database and any preprocessing I do to the main, I have to do the updates. I would simply update nulls to '' for all the fields (which I did to another db) but that preprocessing would then have to be done to every update. John W. Colby www.ColbyConsulting.com Bobby Heid wrote: > John, > > I think you want something along the lines of: > > Isnull(HouseNumberPrefix,'') + ' ' + Isnull(HouseNumber,'') + ' ' + ... > > Of course, you'd have to figure out what to do with the empty spaces for the > empty fields. > > Bobby > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > Sent: Friday, June 06, 2008 10:20 AM > To: Access Developers discussion and problem solving; Dba-Sqlserver > Subject: [dba-SQLServer] Put humpty dumpty back together again > > I have a database that has split the address line into > > HouseNumberPrefix > HouseNumber > HouseNumberSuffix > Direction > StreetName > Mode (N, NW etc) > Quadrant > Appt# > > I need to put Humpty back together again to feed off to > Address Validation. How would I do that in SQL? > > I THINK I can just append them all together with spaces > between the parts and that would be fine EXCEPT that when > you do something like NULL + SomeString you end up with null. > > How would I do what I am trying to do in SQL? > From robert at webedb.com Sun Jun 8 16:36:24 2008 From: robert at webedb.com (Robert L. 
Stewart) Date: Sun, 08 Jun 2008 16:36:24 -0500 Subject: [dba-SQLServer] Put humpty dumpty back together again In-Reply-To: References: Message-ID: <200806082143.m58Lgt19022492@databaseadvisors.com> Try the ISNULL function At 12:00 PM 6/6/2008, you wrote: >Date: Fri, 06 Jun 2008 10:20:06 -0400 >From: jwcolby >Subject: [dba-SQLServer] Put humpty dumpty back together again >To: Access Developers discussion and problem solving > , Dba-Sqlserver > >Message-ID: <48494796.2060006 at colbyconsulting.com> >Content-Type: text/plain; charset=ISO-8859-1; format=flowed > >I have a database that has split the address line into > >HouseNumberPrefix >HouseNumber >HouseNumberSuffix >Direction >StreetName >Mode (N, NW etc) >Quadrant >Appt# > >I need to put Humpty back together again to feed off to >Address Validation. How would I do that in SQL? > >I THINK I can just append them all together with spaces >between the parts and that would be fine EXCEPT that when >you do something like NULL + SomeString you end up with null. > >How would I do what I am trying to do in SQL? > >-- >John W. Colby >www.ColbyConsulting.com > > >------------------------------ > >_______________________________________________ >dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > >End of dba-SQLServer Digest, Vol 64, Issue 5 >******************************************** From paul.hartland at googlemail.com Mon Jun 9 03:19:46 2008 From: paul.hartland at googlemail.com (Paul Hartland) Date: Mon, 9 Jun 2008 09:19:46 +0100 Subject: [dba-SQLServer] Put humpty dumpty back together again In-Reply-To: <003a01c8c994$c6436ef0$52ca4cd0$@rr.com> References: <48494796.2060006@colbyconsulting.com> <003a01c8c994$c6436ef0$52ca4cd0$@rr.com> Message-ID: <38c884770806090119j7c62d24ala4d7b0ef259c6705@mail.gmail.com> Or you could try something like: SELECT CASE WHEN HouseNumberPrefix IS NULL THEN '' ELSE HouseNumberPrefix + ' ' END + CASE WHEN HouseNumber IS NULL THEN '' ELSE HouseNumber + ' ' END + CASE WHEN HouseNumberSuffix IS NULL THEN '' ELSE HouseNumberSuffix + ' ' END Etc Etc AS MainAddress This should give you an address line without additional spaces.... Paul Hartland 2008/6/8 Bobby Heid : > John, > > I think you want something along the lines of: > > Isnull(HouseNumberPrefix,'') + ' ' + Isnull(HouseNumber,'') + ' ' + ... > > Of course, you'd have to figure out what to do with the empty spaces for > the > empty fields. > > Bobby > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > Sent: Friday, June 06, 2008 10:20 AM > To: Access Developers discussion and problem solving; Dba-Sqlserver > Subject: [dba-SQLServer] Put humpty dumpty back together again > > I have a database that has split the address line into > > HouseNumberPrefix > HouseNumber > HouseNumberSuffix > Direction > StreetName > Mode (N, NW etc) > Quadrant > Appt# > > I need to put Humpty back together again to feed off to > Address Validation. How would I do that in SQL? > > I THINK I can just append them all together with spaces > between the parts and that would be fine EXCEPT that when > you do something like NULL + SomeString you end up with null. > > How would I do what I am trying to do in SQL? > > -- > John W. 
Colby > www.ColbyConsulting.com > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- Paul Hartland paul.hartland at googlemail.com From markamatte at hotmail.com Mon Jun 9 14:34:48 2008 From: markamatte at hotmail.com (Mark A Matte) Date: Mon, 9 Jun 2008 19:34:48 +0000 Subject: [dba-SQLServer] List of Search Words In-Reply-To: <38c884770806090119j7c62d24ala4d7b0ef259c6705@mail.gmail.com> References: <48494796.2060006@colbyconsulting.com> <003a01c8c994$c6436ef0$52ca4cd0$@rr.com> <38c884770806090119j7c62d24ala4d7b0ef259c6705@mail.gmail.com> Message-ID: Hello All, I have about 10 million records in which I need to search multiple fields. I have a list of around 200 words. Some of these fields are large text fields. On a smaller scale, in Access, I have used a cartesian join to do this... I can write SQL for about 20 words at a time...and just modify for the different fields(I have a script to run them all). Any suggestions on my best approach to these 'wild card' searches? Thanks, Mark A. Matte _________________________________________________________________ Enjoy 5 GB of free, password-protected online storage. http://www.windowslive.com/skydrive/overview.html?ocid=TXT_TAGLM_WL_Refresh_skydrive_062008 From michael at ddisolutions.com.au Mon Jun 9 18:38:14 2008 From: michael at ddisolutions.com.au (Michael Maddison) Date: Tue, 10 Jun 2008 09:38:14 +1000 Subject: [dba-SQLServer] List of Search Words References: <48494796.2060006@colbyconsulting.com><003a01c8c994$c6436ef0$52ca4cd0$@rr.com> <38c884770806090119j7c62d24ala4d7b0ef259c6705@mail.gmail.com> Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D013BF99A@ddi-01.DDI.local> Hi Mark, Sounds like a job for Full-Text Search/Catalogs. Been a long time since I used it myself but IIRC it works like a charm... cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Mark A Matte Sent: Tuesday, 10 June 2008 5:35 AM To: Discussion concerning MS SQL Server Subject: [dba-SQLServer] List of Search Words Hello All, I have about 10 million records in which I need to search multiple fields. I have a list of around 200 words. Some of these fields are large text fields. On a smaller scale, in Access, I have used a cartesian join to do this... I can write SQL for about 20 words at a time...and just modify for the different fields(I have a script to run them all). Any suggestions on my best approach to these 'wild card' searches? Thanks, Mark A. Matte _________________________________________________________________ Enjoy 5 GB of free, password-protected online storage. http://www.windowslive.com/skydrive/overview.html?ocid=TXT_TAGLM_WL_Refr esh_skydrive_062008 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fuller.artful at gmail.com Tue Jun 10 11:27:44 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 10 Jun 2008 13:27:44 -0300 Subject: [dba-SQLServer] Constant Poll: Approaches Message-ID: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> I have a pair of tables, call them Fiction and Fact. People around the world enter rows into Fiction. 
The database has to process these records and depending on some logic, it makes a decision whether or not to enter a similar row into Fact. The logic is not important to my question. What I'm trying to do is set up a "polling" system so that the engine will examine the Fiction table every 10 seconds or so and if there are any new rows, fire the logic that decides whether to create a row in the Fact table, and then timestamp the Fiction row so we know that it's been processed. One important detail in this operation is that I cannot move to row 2 before completely processing row 1, because the results of row 1 may affect the outcome of row 2. I can think of a couple of approaches -- agent job, trigger, proc with an infinite loop. But before I get started coding this, I thought that I'd reach out and see if anyone's done something similar and has advice on which approach might be best. Thanks in advance for any suggestions. Arthur From robert at webedb.com Tue Jun 10 12:24:22 2008 From: robert at webedb.com (Robert L. Stewart) Date: Tue, 10 Jun 2008 12:24:22 -0500 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: References: Message-ID: <200806101725.m5AHPpWI024523@databaseadvisors.com> Arthur, I would use a proc that uses a job for the scheduling of it. We do something similar for processing rules against an external interface for documents. Robert At 12:00 PM 6/10/2008, you wrote: >Date: Tue, 10 Jun 2008 13:27:44 -0300 >From: "Arthur Fuller" >Subject: [dba-SQLServer] Constant Poll: Approaches >To: "Discussion concerning MS SQL Server" > >Message-ID: > <29f585dd0806100927s168fff34j56f39d0539b543ab at mail.gmail.com> >Content-Type: text/plain; charset=ISO-8859-1 > >I have a pair of tables, call them Fiction and Fact. People around the world >enter rows into Fiction. The database has to process these records and >depending on some logic, it makes a decision whether or not to enter a >similar row into Fact. The logic is not important to my question. What I'm >trying to do is set up a "polling" system so that the engine will examine >the Fiction table every 10 seconds or so and if there are any new rows, fire >the logic that decides whether to create a row in the Fact table, and then >timestamp the Fiction row so we know that it's been processed. > >One important detail in this operation is that I cannot move to row 2 before >completely processing row 1, because the results of row 1 may affect the >outcome of row 2. > >I can think of a couple of approaches -- agent job, trigger, proc with an >infinite loop. But before I get started coding this, I thought that I'd >reach out and see if anyone's done something similar and has advice on which >approach might be best. > >Thanks in advance for any suggestions. > >Arthur From David at sierranevada.com Tue Jun 10 12:18:50 2008 From: David at sierranevada.com (David Lewis) Date: Tue, 10 Jun 2008 10:18:50 -0700 Subject: [dba-SQLServer] Constant Poll: Approaches (Arthur Fuller) In-Reply-To: References: Message-ID: Hi Arthur: This is best done using SSIS (formerly DTS). All the functionality you require is built in to the components. Once the package is built you would schedule it using an agent job. HTH. D. 
Lewis ------------------------------ Message: 3 Date: Tue, 10 Jun 2008 13:27:44 -0300 From: "Arthur Fuller" Subject: [dba-SQLServer] Constant Poll: Approaches To: "Discussion concerning MS SQL Server" Message-ID: <29f585dd0806100927s168fff34j56f39d0539b543ab at mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 I have a pair of tables, call them Fiction and Fact. People around the world enter rows into Fiction. The database has to process these records and depending on some logic, it makes a decision whether or not to enter a similar row into Fact. The logic is not important to my question. What I'm trying to do is set up a "polling" system so that the engine will examine the Fiction table every 10 seconds or so and if there are any new rows, fire the logic that decides whether to create a row in the Fact table, and then timestamp the Fiction row so we know that it's been processed. One important detail in this operation is that I cannot move to row 2 before completely processing row 1, because the results of row 1 may affect the outcome of row 2. I can think of a couple of approaches -- agent job, trigger, proc with an infinite loop. But before I get started coding this, I thought that I'd reach out and see if anyone's done something similar and has advice on which approach might be best. Thanks in advance for any suggestions. Arthur ------------------------------ The contents of this e-mail message and its attachments are covered by the Electronic Communications Privacy Act (18 U.S.C. 2510-2521) and are intended solely for the addressee(s) hereof. If you are not the named recipient, or the employee or agent responsible for delivering the message to the intended recipient, or if this message has been addressed to you in error, you are directed not to read, disclose, reproduce, distribute, disseminate or otherwise use this transmission. If you have received this communication in error, please notify us immediately by return e-mail or by telephone, 530-893-3520, and delete and/or destroy all copies of the message immediately. From fuller.artful at gmail.com Tue Jun 10 13:57:59 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 10 Jun 2008 15:57:59 -0300 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> Message-ID: <29f585dd0806101157u4ec99c8axab7364a6c104a8fb@mail.gmail.com> What other email? Arthur On Tue, Jun 10, 2008 at 3:37 PM, Dejan Sunderic wrote: > Isn't this scenario for SQL Broker? > > Btw, what is this other email? > > > Dejan > > From Elizabeth.J.Doering at wellsfargo.com Tue Jun 10 14:16:09 2008 From: Elizabeth.J.Doering at wellsfargo.com (Elizabeth.J.Doering at wellsfargo.com) Date: Tue, 10 Jun 2008 14:16:09 -0500 Subject: [dba-SQLServer] Encryption & SQL Server 2005 References: Message-ID: Thanks for your response Francisco. I'm totally sidetracked from this right now, but I will probably get back to it next week or so. I'll have the hardware types check for differences in set ups. Thanks, Liz -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, May 28, 2008 5:49 PM To: Discussion concerning MS SQL Server Subject: Re: [dba-SQLServer] Encryption & SQL Server 2005 This is a puzzler, a while back I had to create a system that would encrypt credit card numbers, but I had chosen an outside algorithm so I was not using the MS encryption routines. 
This provided me with two choices for encrypted data, one where it was all hexidecimal and I was able to easily use the searched id encrypted in my own sipher, then used the results to search the database. I'm going to guess you don't get this luxury, but none the less I wanted to chime in with what I have done in the past. What kind of hardware are you dealing with? -- Francisco On Wed, May 28, 2008 at 2:38 PM, wrote: > Dear List, > > This ought to be simple: > > I have 90000+ records in a table called CallerAccount. There's also > an identity primary key, CallerAccountID, and a foreign key CallerID > which links to the rest of the information about our caller. > > Not surprisingly, I need to search for records in the table pertaining > to one account. This may actually span several calls, and several > callers, and many days, so there may be several entries with the same > account number. > > If I were searching cleartext account numbers, this would be a piece > of > cake: Select * from CallerAccount where AccountNumber = '1234567890'. > > Sadly, I am not searching cleartext account numbers. AccountNumber is > a sensitive piece of data around here, so it is necessary that it be > encrypted, hence the table also contains a field EAccountNumber, > containing the binary encrypted value for AccountNumber. > > I could decrypt all the account numbers in the table, write them to a > new field and search that. But since 90000+ records is only the very > beginning of this table, it seemed like it OUGHT to be more sensible > to encrypt the one account number that I know, using the same > certificate and key that I have used on the whole table, then search > for the matching encrypted value. To make this easy, I wrote the > encrypted value out to another table. Then, I thought, I could arrive > at the records I am interested in with a join like this: > > SELECT CallerAccount.CallerID, > CallerAccount.EAccountNumber > FROM CallerAccount INNER JOIN > tempAccount ON > CallerAccount.EAccountNumber = tempAccount.tempEAccountNumber > > Apparently however, the same data encrypted with the same certificate > and the same key does not actually turn out to the same binary value > twice. It's taken me all afternoon to arrive, kicking and screaming, > at this conclusion, but I suppose it makes sense. So I'm at a > standstill, back at decrypting all the records in the table before searching. > > Do any of you have any advice, workarounds, wisdom or comfort for me? > > Thanks, > > > Liz > > > Liz Doering > elizabeth.j.doering at wellsfargo.com > > 612.667.2447 > > This message may contain confidential and/or privileged information. > If you are not the addressee or authorized to receive this for the > addressee, you must not use, copy, disclose, or take any action based > on this message or any information herein. If you have received this > message in error, please advise the sender immediately by reply e-mail > and delete this message. Thank you for your cooperation. > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... 
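One workaround sometimes paired with the built-in routines is to store a keyed hash of the plaintext next to the ciphertext, because the hash is deterministic even though the ciphertext is not, so equality searches can hit the hash column instead of decrypting every row. A rough sketch only, assuming SQL Server 2005's EncryptByKey/HASHBYTES; the AccountSearchHash column, the AccountKey key name and the @Salt value are placeholders, not names from this thread:

-- Assumes the symmetric key has already been opened with OPEN SYMMETRIC KEY.
-- At insert time the plaintext is still in hand, so hash it (with a secret salt) before encrypting it.
INSERT dbo.CallerAccount (CallerID, EAccountNumber, AccountSearchHash)
VALUES (@CallerID,
        EncryptByKey(Key_GUID('AccountKey'), @AccountNumber),
        HASHBYTES('SHA1', @Salt + @AccountNumber));

-- At search time, hash the known account number the same way and compare hashes;
-- no row needs to be decrypted just to find the matches.
SELECT CallerID, EAccountNumber
FROM dbo.CallerAccount
WHERE AccountSearchHash = HASHBYTES('SHA1', @Salt + @AccountNumber);

The salt matters here because account numbers come from a small keyspace, so an unsalted hash could be brute-forced.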
_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com
From accessd at shaw.ca Tue Jun 10 16:35:33 2008 From: accessd at shaw.ca (Jim Lawrence) Date: Tue, 10 Jun 2008 14:35:33 -0700 Subject: Re: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> Message-ID: <685D241F89E949CE9A1A6862C1382C3C@creativesystemdesigns.com> Hi Arthur: Any info on that process would be great as I have been working on a similar message system. Jim
-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Tuesday, June 10, 2008 9:28 AM To: Discussion concerning MS SQL Server Subject: [dba-SQLServer] Constant Poll: Approaches I have a pair of tables, call them Fiction and Fact. People around the world enter rows into Fiction. The database has to process these records and depending on some logic, it makes a decision whether or not to enter a similar row into Fact. The logic is not important to my question. What I'm trying to do is set up a "polling" system so that the engine will examine the Fiction table every 10 seconds or so and if there are any new rows, fire the logic that decides whether to create a row in the Fact table, and then timestamp the Fiction row so we know that it's been processed. One important detail in this operation is that I cannot move to row 2 before completely processing row 1, because the results of row 1 may affect the outcome of row 2. I can think of a couple of approaches -- agent job, trigger, proc with an infinite loop. But before I get started coding this, I thought that I'd reach out and see if anyone's done something similar and has advice on which approach might be best. Thanks in advance for any suggestions. Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com
From fhtapia at gmail.com Tue Jun 10 16:56:38 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Tue, 10 Jun 2008 14:56:38 -0700 Subject: Re: [dba-SQLServer] Encryption & SQL Server 2005 In-Reply-To: References: Message-ID: I did some research on encryption, and many sites touted that encrypting at the database level was considered overkill. It may be because of the high level of processing power required just to arrive at the same answer. Additionally, one article that I read made sense in stating that access to the data should be restricted before you get to the server; that way encrypting the data would not be necessary. This is true: you could locate the column for these accounts, deny access to the general user accounts, and enable access only to those accounts that will need to review the data. You can take this further by having your application switch to a specific Application login ID that allows access, but use the default user logins for all other general tasks.
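Column-level permissions can express that restriction directly. A minimal sketch, assuming two placeholder roles (GeneralUsers and AccountReaders are illustrative names, not from this thread):

-- General users may read the table's non-sensitive columns only.
CREATE ROLE GeneralUsers;
CREATE ROLE AccountReaders;
GRANT SELECT (CallerAccountID, CallerID) ON dbo.CallerAccount TO GeneralUsers;
-- Only the restricted role may also read the encrypted account number.
GRANT SELECT (CallerAccountID, CallerID, EAccountNumber) ON dbo.CallerAccount TO AccountReaders;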
If you need to stick to the encrypted data level then, one method I was thinking that you could use would be to create a decryption function so that you could run your search in the following way: SELECT CallerAccount.CallerID, CallerAccount.EAccountNumber FROM CallerAccount WHERE dbo.myDecryptFunct(CallerAccount.EAccountNumber, 'DecryptKEY') = 'AccountSearch' This would decrypt only the account number through the search and not have them available anywhere on the db. Of course this also depends on where your TempDB is at, and how much ram and processing power you have on your server. On Tue, Jun 10, 2008 at 12:16 PM, wrote: > > Thanks for your response Francisco. I'm totally sidetracked from this > right now, but I will probably get back to it next week or so. I'll > have the hardware types check for differences in set ups. > > Thanks, > > > Liz > > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > Francisco Tapia > Sent: Wednesday, May 28, 2008 5:49 PM > To: Discussion concerning MS SQL Server > Subject: Re: [dba-SQLServer] Encryption & SQL Server 2005 > > This is a puzzler, a while back I had to create a system that would > encrypt credit card numbers, but I had chosen an outside algorithm so I > was not using the MS encryption routines. This provided me with two > choices for encrypted data, one where it was all hexidecimal and I was > able to easily use the searched id encrypted in my own sipher, then used > the results to search the database. I'm going to guess you don't get > this luxury, but none the less I wanted to chime in with what I have > done in the past. What kind of hardware are you dealing with? > > -- > Francisco > > On Wed, May 28, 2008 at 2:38 PM, > wrote: > > > Dear List, > > > > This ought to be simple: > > > > I have 90000+ records in a table called CallerAccount. There's also > > an identity primary key, CallerAccountID, and a foreign key CallerID > > which links to the rest of the information about our caller. > > > > Not surprisingly, I need to search for records in the table pertaining > > > to one account. This may actually span several calls, and several > > callers, and many days, so there may be several entries with the same > > account number. > > > > If I were searching cleartext account numbers, this would be a piece > > of > > cake: Select * from CallerAccount where AccountNumber = '1234567890'. > > > > Sadly, I am not searching cleartext account numbers. AccountNumber is > > > a sensitive piece of data around here, so it is necessary that it be > > encrypted, hence the table also contains a field EAccountNumber, > > containing the binary encrypted value for AccountNumber. > > > > I could decrypt all the account numbers in the table, write them to a > > new field and search that. But since 90000+ records is only the very > > beginning of this table, it seemed like it OUGHT to be more sensible > > to encrypt the one account number that I know, using the same > > certificate and key that I have used on the whole table, then search > > for the matching encrypted value. To make this easy, I wrote the > > encrypted value out to another table. 
Then, I thought, I could arrive > > > at the records I am interested in with a join like this: > > > > SELECT CallerAccount.CallerID, > > CallerAccount.EAccountNumber > > FROM CallerAccount INNER JOIN > > tempAccount ON > > CallerAccount.EAccountNumber = tempAccount.tempEAccountNumber > > > > Apparently however, the same data encrypted with the same certificate > > and the same key does not actually turn out to the same binary value > > twice. It's taken me all afternoon to arrive, kicking and screaming, > > at this conclusion, but I suppose it makes sense. So I'm at a > > standstill, back at decrypting all the records in the table before > searching. > > > > Do any of you have any advice, workarounds, wisdom or comfort for me? > > > > Thanks, > > > > > > Liz > > > > > > Liz Doering > > elizabeth.j.doering at wellsfargo.com > > > > 612.667.2447 > > > > This message may contain confidential and/or privileged information. > > If you are not the addressee or authorized to receive this for the > > addressee, you must not use, copy, disclose, or take any action based > > on this message or any information herein. If you have received this > > message in error, please advise the sender immediately by reply e-mail > > > and delete this message. Thank you for your cooperation. > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > > -- > -Francisco > http://sqlthis.blogspot.com | Tsql and More... > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From fhtapia at gmail.com Tue Jun 10 17:02:33 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Tue, 10 Jun 2008 15:02:33 -0700 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> Message-ID: What is the maximum amount of latency for the data? I had a similar operation not too long ago when I still supported our in-house legacy application. I created a tempTable for orders. The purpose was to enable pre-built in features of Access and was able to bind the table to a subform. When a data entry user would type in a new part into the line, the system would either generate a NEW lineitem ID if the part was for a new SerialNumber machine, or issue the same LineItemID if the SN was one that was already typed into the order. ie: SN: 123456; RRID: 1000, Qty: 1, Part: ABC SN: 654321; RRID: 2000, Qty: 1, Part: ABC SN 123456; RRID: 1000, Qty: 2, Part: XYZ ...etc. This was accomplished by a trigger on the temptabledetail and temptableheader. When an order was completed and submitted, the data was then sent to a webservice for processing in our legacy system and once the reply was received it would update all temp details/headers into the main live tables. On Tue, Jun 10, 2008 at 9:27 AM, Arthur Fuller wrote: > I have a pair of tables, call them Fiction and Fact. 
People around the > world > enter rows into Fiction. The database has to process these records and > depending on some logic, it makes a decision whether or not to enter a > similar row into Fact. The logic is not important to my question. What I'm > trying to do is set up a "polling" system so that the engine will examine > the Fiction table every 10 seconds or so and if there are any new rows, > fire > the logic that decides whether to create a row in the Fact table, and then > timestamp the Fiction row so we know that it's been processed. > > One important detail in this operation is that I cannot move to row 2 > before > completely processing row 1, because the results of row 1 may affect the > outcome of row 2. > > I can think of a couple of approaches -- agent job, trigger, proc with an > infinite loop. But before I get started coding this, I thought that I'd > reach out and see if anyone's done something similar and has advice on > which > approach might be best. > > Thanks in advance for any suggestions. > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From stuart at lexacorp.com.pg Tue Jun 10 17:40:03 2008 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Wed, 11 Jun 2008 08:40:03 +1000 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> Message-ID: <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> Triggers are probably out of the question because you need to process the rows sequentially. Procs with Infinite loops tend to be resource hogs and it's not so easy to program complex business logic in TSQL. I'd go with an external agent. I've got a couple of similar things running. Both use ODBC for the connection and a front end application or service which sits in the background and periodically polls the data base, applies the logic and updates where required. I generally write this sort of thing in PowerBasic with the SQL Tools ODBC package but any environment which can generate small footprint executables, can access ODBC, can "sleep" and can spawn threads will do it. On 10 Jun 2008 at 13:27, Arthur Fuller wrote: > I have a pair of tables, call them Fiction and Fact. People around the world > enter rows into Fiction. The database has to process these records and > depending on some logic, it makes a decision whether or not to enter a > similar row into Fact. The logic is not important to my question. What I'm > trying to do is set up a "polling" system so that the engine will examine > the Fiction table every 10 seconds or so and if there are any new rows, fire > the logic that decides whether to create a row in the Fact table, and then > timestamp the Fiction row so we know that it's been processed. > > One important detail in this operation is that I cannot move to row 2 before > completely processing row 1, because the results of row 1 may affect the > outcome of row 2. > > I can think of a couple of approaches -- agent job, trigger, proc with an > infinite loop. But before I get started coding this, I thought that I'd > reach out and see if anyone's done something similar and has advice on which > approach might be best. > > Thanks in advance for any suggestions. 
> > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From fuller.artful at gmail.com Wed Jun 11 05:56:26 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Wed, 11 Jun 2008 07:56:26 -0300 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> Message-ID: <29f585dd0806110356w342c768ch49dfc998b65f27ce@mail.gmail.com> Thanks for the input, Stuart. Currently I'm thinking that I should try what I need using SSIS (small footprint, external process), but I'm still open on the subject. Arthur On Tue, Jun 10, 2008 at 7:40 PM, Stuart McLachlan wrote: > Triggers are probably out of the question because you need to process the > rows > sequentially. > > Procs with Infinite loops tend to be resource hogs and it's not so easy to > program complex > business logic in TSQL. > > I'd go with an external agent. > > I've got a couple of similar things running. Both use ODBC for the > connection and a front > end application or service which sits in the background and periodically > polls the data base, > applies the logic and updates where required. > > I generally write this sort of thing in PowerBasic with the SQL Tools ODBC > package but any > environment which can generate small footprint executables, can access > ODBC, can > "sleep" and can spawn threads will do it. > > From markamatte at hotmail.com Thu Jun 12 09:27:16 2008 From: markamatte at hotmail.com (Mark A Matte) Date: Thu, 12 Jun 2008 14:27:16 +0000 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: <29f585dd0806110356w342c768ch49dfc998b65f27ce@mail.gmail.com> References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> <29f585dd0806110356w342c768ch49dfc998b65f27ce@mail.gmail.com> Message-ID: Arthur, Just to simplify the question ( for my benifit)... "For each new record in tblFiction, you want to analyze EVERY existing record in tblFiction, do something, and mark that record in tblFiction complete? Thanks, Mark A. Matte > Date: Wed, 11 Jun 2008 07:56:26 -0300 > From: fuller.artful at gmail.com > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Constant Poll: Approaches > > Thanks for the input, Stuart. Currently I'm thinking that I should try what > I need using SSIS (small footprint, external process), but I'm still open on > the subject. > > Arthur > > On Tue, Jun 10, 2008 at 7:40 PM, Stuart McLachlan > wrote: > >> Triggers are probably out of the question because you need to process the >> rows >> sequentially. >> >> Procs with Infinite loops tend to be resource hogs and it's not so easy to >> program complex >> business logic in TSQL. >> >> I'd go with an external agent. >> >> I've got a couple of similar things running. Both use ODBC for the >> connection and a front >> end application or service which sits in the background and periodically >> polls the data base, >> applies the logic and updates where required. >> >> I generally write this sort of thing in PowerBasic with the SQL Tools ODBC >> package but any >> environment which can generate small footprint executables, can access >> ODBC, can >> "sleep" and can spawn threads will do it. 
>> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > _________________________________________________________________ Instantly invite friends from Facebook and other social networks to join you on Windows Live? Messenger. https://www.invite2messenger.net/im/?source=TXT_EML_WLH_InviteFriends From fuller.artful at gmail.com Thu Jun 12 10:41:51 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Thu, 12 Jun 2008 12:41:51 -0300 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> <29f585dd0806110356w342c768ch49dfc998b65f27ce@mail.gmail.com> Message-ID: <29f585dd0806120841g17cf6270y3a6bbff82e9ae05c@mail.gmail.com> Hi Mark, The db in question concerns stock-trading. A couple of hundred users scattered around the world enter "ideas", which are instructions to buy or sell some particular stock. These ideas go into the tblFiction table. Then a complex chunk of code looks at the idea and may or may not act on it. For example, there might be a limit on how much stock a person can buy, either absolutely or within an industry or even of a particular company. Suppose a person's limit on buying apples is 10 and she already owns 6. She submits an idea to buy 3 and the idea goes into the tblFiction table. The process of analyzing the idea begins and while it is still running, she submits another idea to buy 6 more apples. So the first buy must process completely before the second one gets analyzed. Now these "ideas" come in at any time of day, so we have to poll the table in some way to see if there's any unprocessed records to deal with, and if so then deal with them. This may or may not result in an actual order to buy or sell some apples. To answer your question, the only rows that I have to look at in tblFiction are rows entered by the same trader, and depending on various flags on the trader's record, I may have to go a a bit finer-grained (i.e. same trader, same industry, maybe same stock). In most cases, though, it's just "same trader". Anyway, that stuff is all sort of beside the point I was asking, which is what is the best (i.e. least resource-hog) approach to constantly poll the table. On Thu, Jun 12, 2008 at 11:27 AM, Mark A Matte wrote: > > Arthur, > > Just to simplify the question ( for my benifit)... > > "For each new record in tblFiction, you want to analyze EVERY existing > record in tblFiction, do something, and mark that record in tblFiction > complete? > > Thanks, > > Mark A. Matte > > > > Date: Wed, 11 Jun 2008 07:56:26 -0300 > > From: fuller.artful at gmail.com > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Constant Poll: Approaches > > > > Thanks for the input, Stuart. Currently I'm thinking that I should try > what > > I need using SSIS (small footprint, external process), but I'm still open > on > > the subject. > > > > Arthur > > > > On Tue, Jun 10, 2008 at 7:40 PM, Stuart McLachlan > > wrote: > > > >> Triggers are probably out of the question because you need to process > the > >> rows > >> sequentially. > >> > >> Procs with Infinite loops tend to be resource hogs and it's not so easy > to > >> program complex > >> business logic in TSQL. > >> > >> I'd go with an external agent. > >> > >> I've got a couple of similar things running. 
Both use ODBC for the > >> connection and a front > >> end application or service which sits in the background and periodically > >> polls the data base, > >> applies the logic and updates where required. > >> > >> I generally write this sort of thing in PowerBasic with the SQL Tools > ODBC > >> package but any > >> environment which can generate small footprint executables, can access > >> ODBC, can > >> "sleep" and can spawn threads will do it. > >> > >> > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > _________________________________________________________________ > Instantly invite friends from Facebook and other social networks to join > you on Windows Live? Messenger. > https://www.invite2messenger.net/im/?source=TXT_EML_WLH_InviteFriends > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Thu Jun 12 11:39:53 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Thu, 12 Jun 2008 13:39:53 -0300 Subject: [dba-SQLServer] Index Fragmentation Message-ID: <29f585dd0806120939w3ba21e3an5b2ea4e57bdfd484@mail.gmail.com> What is the best practice regarding the Page Fullness and Total Fragmentation for some particular index. On the one I'm looking at now, I have Page Fullness 62.74% and Total Fragmentation 71.67%. I'm not sure how to interpret these numbers. TIA, Arthur From fhtapia at gmail.com Fri Jun 13 18:03:31 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 13 Jun 2008 16:03:31 -0700 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com> <484F8F63.31413.1811A113@stuart.lexacorp.com.pg> Message-ID: I don't see why a trigger would be out of the question.. of course it does depend on how process intensive the loops will be, but within a trigger you have the visibility of the new records and can also get a hold of exiting records. At this point you can choose to update/append, or even ignore certain records. -- Francisco On Tue, Jun 10, 2008 at 3:40 PM, Stuart McLachlan wrote: > Triggers are probably out of the question because you need to process the > rows > sequentially. > > Procs with Infinite loops tend to be resource hogs and it's not so easy to > program complex > business logic in TSQL. > > I'd go with an external agent. > > I've got a couple of similar things running. Both use ODBC for the > connection and a front > end application or service which sits in the background and periodically > polls the data base, > applies the logic and updates where required. > > I generally write this sort of thing in PowerBasic with the SQL Tools ODBC > package but any > environment which can generate small footprint executables, can access > ODBC, can > "sleep" and can spawn threads will do it. > > > > On 10 Jun 2008 at 13:27, Arthur Fuller wrote: > > > I have a pair of tables, call them Fiction and Fact. People around the > world > > enter rows into Fiction. The database has to process these records and > > depending on some logic, it makes a decision whether or not to enter a > > similar row into Fact. The logic is not important to my question. 
What > I'm > > trying to do is set up a "polling" system so that the engine will examine > > the Fiction table every 10 seconds or so and if there are any new rows, > fire > > the logic that decides whether to create a row in the Fact table, and > then > > timestamp the Fiction row so we know that it's been processed. > > > > One important detail in this operation is that I cannot move to row 2 > before > > completely processing row 1, because the results of row 1 may affect the > > outcome of row 2. > > > > I can think of a couple of approaches -- agent job, trigger, proc with an > > infinite loop. But before I get started coding this, I thought that I'd > > reach out and see if anyone's done something similar and has advice on > which > > approach might be best. > > > > Thanks in advance for any suggestions. > > > > Arthur > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From stuart at lexacorp.com.pg Fri Jun 13 19:14:56 2008 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Sat, 14 Jun 2008 10:14:56 +1000 Subject: [dba-SQLServer] Constant Poll: Approaches In-Reply-To: References: <29f585dd0806100927s168fff34j56f39d0539b543ab@mail.gmail.com>, <484F8F63.31413.1811A113@stuart.lexacorp.com.pg>, Message-ID: <48539A20.20043.27DBEAE1@stuart.lexacorp.com.pg> If several people records are saved at about the same time, several triggers can be running at the same time. In Arthur's scenario - Row 2 could be triggering before Row 1 is finished and timestamped and vice versa. On 13 Jun 2008 at 16:03, Francisco Tapia wrote: > I don't see why a trigger would be out of the question.. of course it does > depend on how process intensive the loops will be, but within a trigger you > have the visibility of the new records and can also get a hold of exiting > records. At this point you can choose to update/append, or even ignore > certain records. > > -- > Francisco > > On Tue, Jun 10, 2008 at 3:40 PM, Stuart McLachlan > wrote: > > > Triggers are probably out of the question because you need to process the > > rows > > sequentially. > > > > Procs with Infinite loops tend to be resource hogs and it's not so easy to > > program complex > > business logic in TSQL. > > > > I'd go with an external agent. > > > > I've got a couple of similar things running. Both use ODBC for the > > connection and a front > > end application or service which sits in the background and periodically > > polls the data base, > > applies the logic and updates where required. > > > > I generally write this sort of thing in PowerBasic with the SQL Tools ODBC > > package but any > > environment which can generate small footprint executables, can access > > ODBC, can > > "sleep" and can spawn threads will do it. > > > > > > > > On 10 Jun 2008 at 13:27, Arthur Fuller wrote: > > > > > I have a pair of tables, call them Fiction and Fact. People around the > > world > > > enter rows into Fiction. The database has to process these records and > > > depending on some logic, it makes a decision whether or not to enter a > > > similar row into Fact. 
The logic is not important to my question. What > > I'm > > > trying to do is set up a "polling" system so that the engine will examine > > > the Fiction table every 10 seconds or so and if there are any new rows, > > fire > > > the logic that decides whether to create a row in the Fact table, and > > then > > > timestamp the Fiction row so we know that it's been processed. > > > > > > One important detail in this operation is that I cannot move to row 2 > > before > > > completely processing row 1, because the results of row 1 may affect the > > > outcome of row 2. > > > > > > I can think of a couple of approaches -- agent job, trigger, proc with an > > > infinite loop. But before I get started coding this, I thought that I'd > > > reach out and see if anyone's done something similar and has advice on > > which > > > approach might be best. > > > > > > Thanks in advance for any suggestions. > > > > > > Arthur > > > _______________________________________________ > > > dba-SQLServer mailing list > > > dba-SQLServer at databaseadvisors.com > > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > > http://www.databaseadvisors.com > > > > > > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > > -- > -Francisco > http://sqlthis.blogspot.com | Tsql and More... > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From ab-mi at post3.tele.dk Fri Jun 13 19:31:05 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Sat, 14 Jun 2008 02:31:05 +0200 Subject: [dba-SQLServer] Index Fragmentation In-Reply-To: <29f585dd0806120939w3ba21e3an5b2ea4e57bdfd484@mail.gmail.com> Message-ID: <000301c8cdb5$eeab2f70$2101a8c0@AB> Hi Arthur Short answer: A Page Fullness of 62.74% is normally OK. A Total Fragmentation of 71.67% is bad, and you should rebuild the index using the statement: ALTER INDEX myindex ON myschema.mytable REBUILD Elaborated answer: Guess you are getting your counters examining the property of an index using SQL Server 2005. This is a convenient way in SQL Server 2005 to get the overall counters for index fragmentations. However, you can get more information using this statement, which is a long-pre-SQL2005: DBCC SHOWCONTIG(myschema.mytable, myindex) The DBCC SHOWCONTIG counter: "Avg. Page Density (full)" maps to "Page Fullness" in the index property of SQL Server 2005. The DBCC SHOWCONTIG counter: "Logical scan fragmentation" maps to "Total Fragmentation" in the index property of SQL Server 2005. A "Page Fullness" or "Page Density (full)" of 62.74% indicates a slack (unused space) of 37.26 on each page. For an OLTP (read-write) database it's beneficial to have some amount of unused space, because it prevents forcing page splits when adding new data. For an OLAP database (read-only database) the Page Fullness should be near to 100%, because more rows will then fit to each page, resulting in less IO when reading the data. A "Total Fragmentation" or "Logical scan fragmentation" of 71.67% indicates that 71.67% of the pages in your index are not physically adjacent to the page marked as the "next page" in the header for the index-page. 
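In SQL Server 2005 the same counters are also exposed through the sys.dm_db_index_physical_stats management function, which is convenient if you want to script the check instead of opening the index properties dialog. A rough sketch, with the database, schema and table names as placeholders:

SELECT  i.name AS index_name,
        ps.avg_fragmentation_in_percent,   -- corresponds to "Total Fragmentation" / "Logical scan fragmentation"
        ps.avg_page_space_used_in_percent  -- corresponds to "Page Fullness" / "Avg. Page Density (full)"
FROM    sys.dm_db_index_physical_stats(DB_ID('mydb'), OBJECT_ID('myschema.mytable'), NULL, NULL, 'SAMPLED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

The 'SAMPLED' or 'DETAILED' mode is needed to get avg_page_space_used_in_percent; the default LIMITED mode leaves it NULL.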
The Microsoft recommendation is:
If "Total Fragmentation" <= 30% use: ALTER INDEX myindex ON myschema.mytable REORGANIZE
If "Total Fragmentation" > 30% use: ALTER INDEX myindex ON myschema.mytable REBUILD
REORGANIZE will move the data within the existing pages, resulting in a higher "Page Fullness" - but it won't add new pages. REBUILD will fill the pages as REORGANIZE does, but it will also move existing pages and allocate new pages to make the pages contiguous according to the index key. Asger
-----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] På vegne af Arthur Fuller Sendt: 12. juni 2008 18:40 Til: Discussion concerning MS SQL Server Emne: [dba-SQLServer] Index Fragmentation What is the best practice regarding the Page Fullness and Total Fragmentation for some particular index. On the one I'm looking at now, I have Page Fullness 62.74% and Total Fragmentation 71.67%. I'm not sure how to interpret these numbers. TIA, Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com
From fuller.artful at gmail.com Sat Jun 14 10:20:45 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sat, 14 Jun 2008 12:20:45 -0300 Subject: Re: [dba-SQLServer] Index Fragmentation In-Reply-To: <000301c8cdb5$eeab2f70$2101a8c0@AB> References: <29f585dd0806120939w3ba21e3an5b2ea4e57bdfd484@mail.gmail.com> <000301c8cdb5$eeab2f70$2101a8c0@AB> Message-ID: <29f585dd0806140820n52299ce3qe7e9cd44876d0872@mail.gmail.com> Thanks for the info, Asger. That helps a lot. Arthur On Fri, Jun 13, 2008 at 9:31 PM, Asger Blond wrote: > Hi Arthur > > Short answer: > A Page Fullness of 62.74% is normally OK. > A Total Fragmentation of 71.67% is bad, and you should rebuild the index > using the statement: > ALTER INDEX myindex ON myschema.mytable REBUILD > > Elaborated answer... > >
From fuller.artful at gmail.com Mon Jun 16 06:22:36 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 16 Jun 2008 08:22:36 -0300 Subject: [dba-SQLServer] Weird Index stats Message-ID: <29f585dd0806160422x5b56904ai3df9b3fa838ae6f8@mail.gmail.com> I have an issue with a particular table that is heavily trafficked. Two of its seven indexes report serious fragmentation, one at 98% and the other (the PK) at 87%. What is strange to me is that a nightly job runs that does the following on this table: (this includes only the code to touch the indexes in question; the full code does the same thing to all seven indexes)
ALTER INDEX [IX_BESTSecurityLog__BSID_BSLID_CID] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = ON)
ALTER INDEX [IX_BESTSecurityLog__BSID_BSLID_CID] ON [dbo].[BESTSecurityLog] REORGANIZE WITH ( LOB_COMPACTION = ON )
ALTER INDEX [IX_BESTSecurityLog__BSID_BSLID_CID] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = OFF)
ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = ON)
ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REORGANIZE WITH ( LOB_COMPACTION = ON )
ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = OFF)
Identical code runs on the other indexes and their fragmentation levels are in the range of 3% or less. The PK is identity(1,1). Why would it report such a high fragmentation level? I'm not sure how to solve this puzzle. Should I drop and recreate the indexes instead of altering them as in the code above?
Any clues are gratefully appreciated, Arthur From ab-mi at post3.tele.dk Mon Jun 16 06:51:52 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Mon, 16 Jun 2008 13:51:52 +0200 Subject: [dba-SQLServer] Weird Index stats In-Reply-To: <29f585dd0806160422x5b56904ai3df9b3fa838ae6f8@mail.gmail.com> Message-ID: <000001c8cfa7$76ca8110$2101a8c0@AB> Did you try REBUILD instead of REORGANIZE? Asger -----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] P? vegne af Arthur Fuller Sendt: 16. juni 2008 13:23 Til: Discussion concerning MS SQL Server Emne: [dba-SQLServer] Weird Index stats I have an issue with a particular table that is heavily trafficked. Two of its seven indexes report serious fragmentation, one at 98% and the other (the PK) at 87%. What is strange to me is that a nightly job runs that does the following on this table: (this includes only the code to touch the indexes in question; the full code does the same thing to all seven indexes) ALTER INDEX [IX_BESTSecurityLog__BSID_BSLID_CID] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = ON) ALTER INDEX [IX_BESTSecurityLog__BSID_BSLID_CID] ON [dbo].[BESTSecurityLog] REORGANIZE WITH ( LOB_COMPACTION = ON ) ALTER INDEX [IX_BESTSecurityLog__BSID_BSLID_CID] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = OFF) ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = ON) ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REORGANIZE WITH ( LOB_COMPACTION = ON ) ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] SET (ALLOW_PAGE_LOCKS = OFF) Identical code runs on the other indexes and their fragmentation levels are in the range of 3% or less. The PK is identity(1,1). Why would it report such a high fragmentation level? I'm not sure how to solve this puzzle. Should I drop and recreate the indexes instead of altering them as in the code above? Any clues are gratefully appreciated, Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fuller.artful at gmail.com Mon Jun 16 06:52:22 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 16 Jun 2008 08:52:22 -0300 Subject: [dba-SQLServer] Weird Index stats In-Reply-To: <000001c8cfa7$76ca8110$2101a8c0@AB> References: <29f585dd0806160422x5b56904ai3df9b3fa838ae6f8@mail.gmail.com> <000001c8cfa7$76ca8110$2101a8c0@AB> Message-ID: <29f585dd0806160452w428133d9j58410ea77ec92674@mail.gmail.com> No I didn't but thanks for the tip. I'll read up on the difference right now and perhaps apply the change this evening. Thanks. Arthur On Mon, Jun 16, 2008 at 8:51 AM, Asger Blond wrote: > Did you try REBUILD instead of REORGANIZE? > > Asger > From fuller.artful at gmail.com Mon Jun 16 12:07:15 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 16 Jun 2008 14:07:15 -0300 Subject: [dba-SQLServer] Can't drop the PK from a table Message-ID: <29f585dd0806161007x4c7c1d41ue0cd321b74827a5a@mail.gmail.com> I have a table into which a process inserts 1300 rows every 10 minutes. Currently there are 34 million rows in the table. There are 7 indexes on the table, two of which are seriously fragmented (98% and 87%). I ran dbcc reindex on the table and it changed the fragmentation not at all. Why would that be the case? 
Another thing that puzzles me is that one of the fragmented indexes is the PK clustered index, and the PK is an int identity. Also, I find that I cannot drop the PK index, which I thought I'd do as a way of rebuilding it from scratch. When I try to drop the index and save the table, the system times out and won't let me do it. Any idea why this might be so? Arthur
From ab-mi at post3.tele.dk Mon Jun 16 14:31:59 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Mon, 16 Jun 2008 21:31:59 +0200 Subject: Re: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <29f585dd0806161007x4c7c1d41ue0cd321b74827a5a@mail.gmail.com> Message-ID: <000001c8cfe7$a55c6530$2101a8c0@AB> If you really want to drop an index associated with a constraint (a PK or Unique Constraint) you have to drop the *constraint* - you can't just drop the index, but dropping the constraint will automatically drop the associated index as well. SQL Server has made this restriction as a precaution against dropping a unique index without knowing that the index is there for a constraint reason: that's why you have to explicitly drop the constraint, telling SQL Server that you are aware of what you are doing... But anyway, I don't think you have to drop the index. The REBUILD option is made just for your case: a heavily fragmented index and an index bound to a constraint. Using the example from your previous posting I would recommend this statement:
ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REBUILD
Asger
-----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] På vegne af Arthur Fuller Sendt: 16. juni 2008 19:07 Til: Discussion concerning MS SQL Server Emne: [dba-SQLServer] Can't drop the PK from a table I have a table into which a process inserts 1300 rows every 10 minutes. Currently there are 34 million rows in the table. There are 7 indexes on the table, two of which are seriously fragmented (98% and 87%). I ran dbcc reindex on the table and it changed the fragmentation not at all. Why would that be the case? Another thing that puzzles me is that one of the fragmented indexes is the PK clustered index, and the PK is an int identity. Also, I find that I cannot drop the PK index, which I thought I'd do as a way of rebuilding it from scratch. When I try to drop the index and save the table, the system times out and won't let me do it. Any idea why this might be so? Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com
From fhtapia at gmail.com Mon Jun 16 14:34:46 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 16 Jun 2008 12:34:46 -0700 Subject: Re: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <000001c8cfe7$a55c6530$2101a8c0@AB> References: <29f585dd0806161007x4c7c1d41ue0cd321b74827a5a@mail.gmail.com> <000001c8cfe7$a55c6530$2101a8c0@AB> Message-ID: We have several large databases here these days, and the technique I follow is ALTER INDEX with the REBUILD option. Although this works most of the time, I do have a few tables that require rebuilding their indexes by dropping and re-creating them.
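When it does come to that for a primary key, the drop has to go through the constraint, roughly along these lines; the key column name below is only a placeholder, since the thread doesn't name it, and any foreign keys referencing the PK would have to be dropped first:

-- Sketch only: dropping the PK constraint also drops its clustered index (the table becomes a heap),
-- and the ADD CONSTRAINT rebuilds the clustered index from scratch, so the table is effectively rewritten twice.
ALTER TABLE dbo.BESTSecurityLog DROP CONSTRAINT PK_BESTSecurityLog;
ALTER TABLE dbo.BESTSecurityLog
    ADD CONSTRAINT PK_BESTSecurityLog PRIMARY KEY CLUSTERED (BESTSecurityLogID);

On a 34-million-row table that is a much heavier operation than ALTER INDEX ... REBUILD, which is why the rebuild is usually tried first.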
On Mon, Jun 16, 2008 at 12:31 PM, Asger Blond wrote: > If you really want to drop an index associated with a constraint (a PK or > Unique Constraint) you have to drop the *constraint* - you can't just drop > the index, but dropping the constraint will automatically drop the > associated index as well. > SQL Server has made this restriction as a precaution against dropping a > unique index not knowing that the index is there for a constraint-reason: > that's why you have to explicitly drop the constraint, telling SQL Server > that you are aware of what you are doing... > > But any way, I don't think you have to drop the index. The REBUILD option > is > made just for your case: a heavy fragmented index and an index bound to a > constraint. Using the example from your previous posting I would recommend > this statement: > > ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REBUILD > > Asger > > > -----Oprindelig meddelelse----- > Fra: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] P? vegne af Arthur > Fuller > Sendt: 16. juni 2008 19:07 > Til: Discussion concerning MS SQL Server > Emne: [dba-SQLServer] Can't drop the PK from a table > > I have a table into which a process inserts 1300 rows every 10 minutes. > Currently there are 34 million rows in the table. There are 7 indexes on > the > table, two of which are seriously fragmented (98% and 87%). I ran dbcc > reindex on the table and it changed the fragmentation not at all. Why would > that be the case? Another thing that puzzles me is that one of the > fragmented indexes is the PK clustered index and the PK as is int identity. > Also, I find that I cannot drop the PK index, which I thought I'd do as a > way of rebuilding it from scratch. When I try to drop the index and save > the > table, the system times out and won't let me do it. Any idea why this might > be so? > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From ab-mi at post3.tele.dk Mon Jun 16 17:31:44 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Tue, 17 Jun 2008 00:31:44 +0200 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <000001c8cfe7$a55c6530$2101a8c0@AB> Message-ID: <000001c8d000$c17f9340$2101a8c0@AB> Arthur, sorry I didn't read your scenario close enough: "I have a table into which a process inserts 1300 rows every 10 minutes. Currently there are 34 million rows in the table." When rebuilding an index SQL Server normally locks the key, which will raise concurrency issues for your insert-process. To circumvent this issue you should use the option WITH ONLINE = ON, which will place the index rebuild-values into the tempdb database and apply them as allowed by concurrency. 
So this would be the statement: ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REBUILD WITH ONLINE = ON Asger -----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] P? vegne af Asger Blond Sendt: 16. juni 2008 21:32 Til: 'Discussion concerning MS SQL Server' Emne: Re: [dba-SQLServer] Can't drop the PK from a table If you really want to drop an index associated with a constraint (a PK or Unique Constraint) you have to drop the *constraint* - you can't just drop the index, but dropping the constraint will automatically drop the associated index as well. SQL Server has made this restriction as a precaution against dropping a unique index not knowing that the index is there for a constraint-reason: that's why you have to explicitly drop the constraint, telling SQL Server that you are aware of what you are doing... But any way, I don't think you have to drop the index. The REBUILD option is made just for your case: a heavy fragmented index and an index bound to a constraint. Using the example from your previous posting I would recommend this statement: ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REBUILD Asger -----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] P? vegne af Arthur Fuller Sendt: 16. juni 2008 19:07 Til: Discussion concerning MS SQL Server Emne: [dba-SQLServer] Can't drop the PK from a table I have a table into which a process inserts 1300 rows every 10 minutes. Currently there are 34 million rows in the table. There are 7 indexes on the table, two of which are seriously fragmented (98% and 87%). I ran dbcc reindex on the table and it changed the fragmentation not at all. Why would that be the case? Another thing that puzzles me is that one of the fragmented indexes is the PK clustered index and the PK as is int identity. Also, I find that I cannot drop the PK index, which I thought I'd do as a way of rebuilding it from scratch. When I try to drop the index and save the table, the system times out and won't let me do it. Any idea why this might be so? Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fuller.artful at gmail.com Tue Jun 17 06:11:06 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jun 2008 08:11:06 -0300 Subject: [dba-SQLServer] SQL Samples Files Message-ID: <29f585dd0806170411n27b52e4dodcfae6ee7dfebaeb@mail.gmail.com> Does anyone know where I can download the sample files for SQL 2005? The installation at work doesn't have Adventureworks etc. loaded. I have the disks. Is there a way to install just the samples without reinstalling the whole package? 
TIA, Arthur From robin.lawrence at merseybeat.co.uk Tue Jun 17 06:20:42 2008 From: robin.lawrence at merseybeat.co.uk (Robin (Merseybeat)) Date: Tue, 17 Jun 2008 12:20:42 +0100 Subject: [dba-SQLServer] SQL Samples Files In-Reply-To: <560E2B80EC8F624B93A87B943B7A9CD5859C4C@rgiserv.rg.local> Message-ID: <560E2B80EC8F624B93A87B943B7A9CD559AF66@rgiserv.rg.local> Hi Arthur, Link here http://codeplex.com/SqlServerSamples HTH Rgds Robin Lawrence -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: 17 June 2008 12:11 To: Discussion concerning MS SQL Server Subject: [dba-SQLServer] SQL Samples Files Does anyone know where I can download the sample files for SQL 2005? The installation at work doesn't have Adventureworks etc. loaded. I have the disks. Is there a way to install just the samples without reinstalling the whole package? TIA, Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fuller.artful at gmail.com Tue Jun 17 06:39:56 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jun 2008 08:39:56 -0300 Subject: [dba-SQLServer] Bulk Insert Problem Message-ID: <29f585dd0806170439w7db4a795ud28f4514027b7fb@mail.gmail.com> I'm exploring Bulk Insert for the first time, following the code in Paul Neilson's SQL Bible. USE [TestBulkInsert] GO CREATE TABLE [dbo].[AddressStaging]( [ID] [int] NULL, [Address] [varchar](500) NULL, [City] [varchar](500) NULL, [Region] [varchar](500) NULL, [PostalCode] [varchar](500) NULL, [GUID] [varchar](500) NULL, [Updated] [datetime] NULL ) ON [PRIMARY] GO BULK INSERT AddressStaging FROM 'C:\Address.csv' WITH (FIRSTROW = 1, ROWTERMINATOR = '\n') This results in the error MESSAGE: Bulk load: DataFileType was incorrectly specified as char. DataFileType will be assumed to be widechar because the data file has a Unicode signature. Bulk load: DataFileType was incorrectly specified as char. DataFileType will be assumed to be widechar because the data file has a Unicode signature. Msg 7339, Level 16, State 1, Line 1 OLE DB provider 'BULK' for linked server '(null)' returned invalid data for column '[BULK].Updated'. The Address.csv file is the sample file in the Adventureworks directory. Anyone know what I'm doing wrong and how to make it work correctly? Thanks! Arthur From Gustav at cactus.dk Tue Jun 17 06:47:06 2008 From: Gustav at cactus.dk (Gustav Brock) Date: Tue, 17 Jun 2008 13:47:06 +0200 Subject: [dba-SQLServer] Bulk Insert Problem Message-ID: Hi Arthur >From the error message it sounds like varchar is expected to be replaced with nvarchar in the table definition. /gustav >>> fuller.artful at gmail.com 17-06-2008 13:39 >>> I'm exploring Bulk Insert for the first time, following the code in Paul Neilson's SQL Bible. USE [TestBulkInsert] GO CREATE TABLE [dbo].[AddressStaging]( [ID] [int] NULL, [Address] [varchar](500) NULL, [City] [varchar](500) NULL, [Region] [varchar](500) NULL, [PostalCode] [varchar](500) NULL, [GUID] [varchar](500) NULL, [Updated] [datetime] NULL ) ON [PRIMARY] GO BULK INSERT AddressStaging FROM 'C:\Address.csv' WITH (FIRSTROW = 1, ROWTERMINATOR = '\n') This results in the error MESSAGE: Bulk load: DataFileType was incorrectly specified as char. DataFileType will be assumed to be widechar because the data file has a Unicode signature. 
Bulk load: DataFileType was incorrectly specified as char. DataFileType will be assumed to be widechar because the data file has a Unicode signature. Msg 7339, Level 16, State 1, Line 1 OLE DB provider 'BULK' for linked server '(null)' returned invalid data for column '[BULK].Updated'. The Address.csv file is the sample file in the Adventureworks directory. Anyone know what I'm doing wrong and how to make it work correctly? Thanks! Arthur From fuller.artful at gmail.com Tue Jun 17 06:47:51 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jun 2008 08:47:51 -0300 Subject: [dba-SQLServer] SQL Samples Files In-Reply-To: <560E2B80EC8F624B93A87B943B7A9CD559AF66@rgiserv.rg.local> References: <560E2B80EC8F624B93A87B943B7A9CD5859C4C@rgiserv.rg.local> <560E2B80EC8F624B93A87B943B7A9CD559AF66@rgiserv.rg.local> Message-ID: <29f585dd0806170447q21c46c02q7710ba0868c84fcb@mail.gmail.com> Thanks, Robin. turns out I already have the samples installed on my notebook. But now I have a new problem concerning Bulk Insert. See the message I just posted a moment ago. Thanks, Arthur On Tue, Jun 17, 2008 at 8:20 AM, Robin (Merseybeat) < robin.lawrence at merseybeat.co.uk> wrote: > Hi Arthur, > Link here > http://codeplex.com/SqlServerSamples > HTH > Rgds > Robin Lawrence > From fuller.artful at gmail.com Tue Jun 17 06:54:59 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jun 2008 08:54:59 -0300 Subject: [dba-SQLServer] Bulk Insert Problem In-Reply-To: References: Message-ID: <29f585dd0806170454n38a0c39el1524c588d59dc4a2@mail.gmail.com> Good guess, Gustav. It works now. I'm on a slow box and I inserted 19614 rows in 3 seconds. On Tue, Jun 17, 2008 at 8:47 AM, Gustav Brock wrote: > Hi Arthur > > >From the error message it sounds like varchar is expected to be replaced > with nvarchar in the table definition. > > /gustav From fuller.artful at gmail.com Tue Jun 17 11:38:09 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jun 2008 13:38:09 -0300 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <000001c8d000$c17f9340$2101a8c0@AB> References: <000001c8cfe7$a55c6530$2101a8c0@AB> <000001c8d000$c17f9340$2101a8c0@AB> Message-ID: <29f585dd0806170938x56979f2bh8850f5e4f5d002df@mail.gmail.com> Thanks for the clarification. Arthur On Mon, Jun 16, 2008 at 7:31 PM, Asger Blond wrote: > Arthur, sorry I didn't read your scenario close enough: > "I have a table into which a process inserts 1300 rows every 10 minutes. > Currently there are 34 million rows in the table." > > When rebuilding an index SQL Server normally locks the key, which will > raise > concurrency issues for your insert-process. > > To circumvent this issue you should use the option WITH ONLINE = ON, which > will place the index rebuild-values into the tempdb database and apply them > as allowed by concurrency. 
> > So this would be the statement: > > ALTER INDEX [PK_BESTSecurityLog] ON [dbo].[BESTSecurityLog] REBUILD WITH > ONLINE = ON > > Asger > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Tue Jun 17 11:41:11 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jun 2008 13:41:11 -0300 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <000001c8cfe7$a55c6530$2101a8c0@AB> References: <29f585dd0806161007x4c7c1d41ue0cd321b74827a5a@mail.gmail.com> <000001c8cfe7$a55c6530$2101a8c0@AB> Message-ID: <29f585dd0806170941n274088f2u504ceb7ebb4dd2c0@mail.gmail.com> thanks Asger. I'm running it now. I'll respond with the results, in case anyone else is interested in this subject. From ab-mi at post3.tele.dk Wed Jun 18 14:15:57 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Wed, 18 Jun 2008 21:15:57 +0200 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <29f585dd0806170941n274088f2u504ceb7ebb4dd2c0@mail.gmail.com> Message-ID: <000001c8d177$bd418400$2101a8c0@AB> Hi Arthur Did it work out? Asger -----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] På vegne af Arthur Fuller Sendt: 17. juni 2008 18:41 Til: Discussion concerning MS SQL Server Emne: Re: [dba-SQLServer] Can't drop the PK from a table thanks Asger. I'm running it now. I'll respond with the results, in case anyone else is interested in this subject. _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From pcs.accessd at gmail.com Wed Jun 18 19:13:51 2008 From: pcs.accessd at gmail.com (Borge Hansen) Date: Thu, 19 Jun 2008 10:13:51 +1000 Subject: [dba-SQLServer] [dba-sqlserver] transactional SP involving tables from two SQL Db - Web Service Message-ID: Hi, I am in desperate need for some input advice as to the following: Background - simplified: Access 2003 application with SQL2005 Db backend The application is processing requests for short term vacancies (Booking) on behalf of a remote Client, identifying suitable Person to fill the vacancy (Placement). Bookings are placed - via an application on the remote Client's distributed intranet - into a table on Client's SQL2005 Server.
Our application is querying this table throughout the day in order to bring new Bookings into our SQL db table We also perform updates on a Status Field Once a Placement has been identified (via text to speech automated phone calls to potential Person filling the vacancy - but that's a different story) we write the data to a new record on the remote Client's Placement table on remote client SQL Server: tblremoteBooking tblremotePlacement on our local SQL Server: tblBooking tblPlacement We have been given access to the tables on the remote Client Server via a userprofile/login that gives us read,insert and update rights to these two tables We access the tblremoteBooking table via ODBC linked table to the Access database window. We then do query join between tblremoteBooking and tblBooking to get all new bookings from the client into our table Other updates and inserts happens via ADODB connection Issues / Challenges: 1) There is currently a push from Client's IT people to revoke the direct table access and give us a web service for interacting with the tables - I don't like that because I know very little about web services. 2) I'd like to be able to perform joins etc between the remote booking table and our local one in a SP on the sql db - rather than doing it in an access query on odbc linked tables in the Access database window. Similarly I'd like to be able to create a transactional SP that involves update / insert to both remote and local tables with rollback so that if one of the update / insert queries fail we can roll back the whole transaction and give the data the remote and local tables in sync. What's required to access tables on a remote SQL Db inside a local SQL Db in a stored procedure ? 3) Another developer has created a web portal for our casual relief staff to go online and have a look at Bookings on the web portal and accept a suitable booking. All is tested and ready to go except for the web app to have acces to the client's table for doing an insert to the placement table and an update to the booking table. Again, here we are inserting and update to both our 'local' tables and the client's table so we want the same stored procedures put in place to ensure data is in sync.... What to do? Assuming that the Client would give us access the the remote tables, can someone give pointers to how to establish connection to these tables from our local SQL Db so we can manipulate these tables in a Stored Procedure. Assuming that we are going with a web service for reading and writing to the remote tables - can we then still access these tables in a SP - or will we have to do this elsewhere (other coding environment) .... and remember we are doing this in VBA ! Any comments / advice appreciated Regards borge From Gustav at cactus.dk Thu Jun 19 03:15:44 2008 From: Gustav at cactus.dk (Gustav Brock) Date: Thu, 19 Jun 2008 10:15:44 +0200 Subject: [dba-SQLServer] transactional SP involving tables from two SQL Db - Web Service Message-ID: Hi Borge You can't do an engine level transaction that spans both SQL Server and JET (your local database). So you will have to write your code like a higher level transaction and catch lost connections and the like during an update. If the tables at the SQL Server are not too big, an ODBC connection and linked tables are fine and, indeed, very simple to handle as you can use normal Access queries. However, if you have an ODBC connection you can also run a pass-through query which - as you probably know - is run on the SQL Server. 
Very handy and speedy for the update and insert of data on the server. I see no reason why to move further creating stored procedures for the simple tasks you describe but it can be done, of course. Again, you can call these via ODBC. I'm not sure how to interface a Web Service with Access 2003. With .Net it is "piece of cake". I see no advantages in this for you, so unless management decide for this I would stick with ODBC and/or ADODB. /gustav >>> pcs.accessd at gmail.com 19-06-2008 02:13 >>> Hi, I am in desperate need for some input advice as to the following: Background - simplified: Access 2003 application with SQL2005 Db backend The application is processing requests for short term vacancies (Booking) on behalf of a remote Client, identifying suitable Person to fill the vacancy (Placement). Bookings are placed - via an application on the remote Client's distributed intranet - into a table on Client's SQL2005 Server. Our application is querying this table throughout the day in order to bring new Bookings into our SQL db table We also perform updates on a Status Field Once a Placement has been identified (via text to speech automated phone calls to potential Person filling the vacancy - but that's a different story) we write the data to a new record on the remote Client's Placement table on remote client SQL Server: tblremoteBooking tblremotePlacement on our local SQL Server: tblBooking tblPlacement We have been given access to the tables on the remote Client Server via a userprofile/login that gives us read,insert and update rights to these two tables We access the tblremoteBooking table via ODBC linked table to the Access database window. We then do query join between tblremoteBooking and tblBooking to get all new bookings from the client into our table Other updates and inserts happens via ADODB connection Issues / Challenges: 1) There is currently a push from Client's IT people to revoke the direct table access and give us a web service for interacting with the tables - I don't like that because I know very little about web services. 2) I'd like to be able to perform joins etc between the remote booking table and our local one in a SP on the sql db - rather than doing it in an access query on odbc linked tables in the Access database window. Similarly I'd like to be able to create a transactional SP that involves update / insert to both remote and local tables with rollback so that if one of the update / insert queries fail we can roll back the whole transaction and give the data the remote and local tables in sync. What's required to access tables on a remote SQL Db inside a local SQL Db in a stored procedure ? 3) Another developer has created a web portal for our casual relief staff to go online and have a look at Bookings on the web portal and accept a suitable booking. All is tested and ready to go except for the web app to have acces to the client's table for doing an insert to the placement table and an update to the booking table. Again, here we are inserting and update to both our 'local' tables and the client's table so we want the same stored procedures put in place to ensure data is in sync.... What to do? Assuming that the Client would give us access the the remote tables, can someone give pointers to how to establish connection to these tables from our local SQL Db so we can manipulate these tables in a Stored Procedure. 
Assuming that we are going with a web service for reading and writing to the remote tables - can we then still access these tables in a SP - or will we have to do this elsewhere (other coding environment) .... and remember we are doing this in VBA ! Any comments / advice appreciated Regards borge From fuller.artful at gmail.com Thu Jun 19 04:18:52 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Thu, 19 Jun 2008 06:18:52 -0300 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <000001c8d177$bd418400$2101a8c0@AB> References: <29f585dd0806170941n274088f2u504ceb7ebb4dd2c0@mail.gmail.com> <000001c8d177$bd418400$2101a8c0@AB> Message-ID: <29f585dd0806190218i20036d60qa87ec8c4f7798589@mail.gmail.com> I ran out of disk space, which is hilarious. I bought this new notebook less than a month ago and it's got a 100 GB drive, and I'm already out of disk space. Time to look at one of those USB drives. I've never played with one before but I see that they come in sizes up to a TB. My gut feeling is that they have to be a lot slower than a normal hard disk. Can anyone who has one of these respond with even vague comparisons of speed? A. On Wed, Jun 18, 2008 at 4:15 PM, Asger Blond wrote: > Hi Arthur > Did it work out? > > Asger > > From ridermark at gmail.com Thu Jun 19 07:37:34 2008 From: ridermark at gmail.com (Mark Rider) Date: Thu, 19 Jun 2008 07:37:34 -0500 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <29f585dd0806190218i20036d60qa87ec8c4f7798589@mail.gmail.com> References: <29f585dd0806170941n274088f2u504ceb7ebb4dd2c0@mail.gmail.com> <000001c8d177$bd418400$2101a8c0@AB> <29f585dd0806190218i20036d60qa87ec8c4f7798589@mail.gmail.com> Message-ID: I have had good results with the WD MyBook drives. I have a couple of their 1TB drives for the backup and testing databases, and have yet to see any major slowdowns compared to the copy of the production DB on my laptop. -- Mark Rider http://commonsensesecurity.info If you're not part of the solution, you're part of the precipitate. - Henry J. Tillman On Thu, Jun 19, 2008 at 4:18 AM, Arthur Fuller wrote: > I ran out of disk space, which is hilarious. I bought this new notebook less > than a month ago and it's got a 100 GB drive, and I'm already out of disk > space. Time to look at one of those USB drives. I've never played with one > before but I see that they come in sizes up to a TB. My gut feeling is that > they have to be a lot slower than a normal hard disk. > > Can anyone who has one of these respond with even vague comparisons of > speed? > > A. > > On Wed, Jun 18, 2008 at 4:15 PM, Asger Blond wrote: > >> Hi Arthur >> Did it work out? >> >> Asger >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Thu Jun 19 07:51:49 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Thu, 19 Jun 2008 09:51:49 -0300 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: References: <29f585dd0806170941n274088f2u504ceb7ebb4dd2c0@mail.gmail.com> <000001c8d177$bd418400$2101a8c0@AB> <29f585dd0806190218i20036d60qa87ec8c4f7798589@mail.gmail.com> Message-ID: <29f585dd0806190551y2ab0bb0s278d0459272528d5@mail.gmail.com> Thanks. I don't know whether my local store carries those. They do have LaCie drives, I saw them on the shelf last visit. Today I'll ask about WD. 
Arthur On Thu, Jun 19, 2008 at 9:37 AM, Mark Rider wrote: > I have had good results with the WD MyBook drives. I have a couple of > their 1TB drives for the backup and testing databases, and have yet to > see any major slowdowns compared to the copy of the production DB on > my laptop. > From fhtapia at gmail.com Thu Jun 19 08:42:49 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Thu, 19 Jun 2008 06:42:49 -0700 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <29f585dd0806190551y2ab0bb0s278d0459272528d5@mail.gmail.com> References: <29f585dd0806170941n274088f2u504ceb7ebb4dd2c0@mail.gmail.com> <000001c8d177$bd418400$2101a8c0@AB> <29f585dd0806190218i20036d60qa87ec8c4f7798589@mail.gmail.com> <29f585dd0806190551y2ab0bb0s278d0459272528d5@mail.gmail.com> Message-ID: as with all things ymmv as I don't trust the wd USB drives as ive had two that over heated ive also used lacie and had good luck with those, but I think the newer stuff should all be decent. One thing I will say that a two drive 1tb is faster than a 1drive tv On 6/19/08, Arthur Fuller wrote: > Thanks. I don't know whether my local store carries those. They do have > LaCie drives, I saw them on the shelf last visit. Today I'll ask about WD. > > Arthur > > On Thu, Jun 19, 2008 at 9:37 AM, Mark Rider wrote: > >> I have had good results with the WD MyBook drives. I have a couple of >> their 1TB drives for the backup and testing databases, and have yet to >> see any major slowdowns compared to the copy of the production DB on >> my laptop. >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From ab-mi at post3.tele.dk Thu Jun 19 10:34:16 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Thu, 19 Jun 2008 17:34:16 +0200 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: <29f585dd0806190218i20036d60qa87ec8c4f7798589@mail.gmail.com> Message-ID: <000f01c8d222$03ca4dd0$2101a8c0@AB> I guess it's the tempdb filling your disk. Did you check the size of tempdb to see if she's the culprit? It strikes me that your process inserting 1300 rows every 10 minutes maybe doesn't leave a window wide enough for SQL Server to apply the index rebuild-values from tempdb, which would then grow beyond acceptable size. To get tempdb back to normal size you could restart SQL Server, which will recreate this database. Is it viable for you to temporarily disable the process or expand the interval between the processes? In that case you should be able to run the REBUILD INDEX without the online option, supposing of course no other processes are blocking the index. Asger -----Oprindelig meddelelse----- Fra: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] P? vegne af Arthur Fuller Sendt: 19. juni 2008 11:19 Til: Discussion concerning MS SQL Server Emne: Re: [dba-SQLServer] Can't drop the PK from a table I ran out of disk space, which is hilarious. I bought this new notebook less than a month ago and it's got a 100 GB drive, and I'm already out of disk space. Time to look at one of those USB drives. I've never played with one before but I see that they come in sizes up to a TB. My gut feeling is that they have to be a lot slower than a normal hard disk. Can anyone who has one of these respond with even vague comparisons of speed? A. 
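On Asger's tempdb point: before restarting the service it can be worth confirming that tempdb really is the space eater. A quick check, nothing more than a sketch:

USE tempdb
EXEC sp_spaceused                       -- total allocated and unallocated space
SELECT name, size / 128 AS size_mb      -- size is stored in 8 KB pages
FROM sys.database_files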
On Wed, Jun 18, 2008 at 4:15 PM, Asger Blond wrote: > Hi Arthur > Did it work out? > > Asger > > _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Thu Jun 19 12:14:36 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Thu, 19 Jun 2008 10:14:36 -0700 Subject: [dba-SQLServer] [dba-sqlserver] transactional SP involving tables from two SQL Db - Web Service In-Reply-To: References: Message-ID: Borge, I'm a big fan of using webservices when they make sense. You mentioned a Client DB and your DB. I suspect that running ODBC is taxing the network connection thus the requirement to go with webservices. You can call webservices from Access using the soap object but they can be problematic. I will dig up some sample code that I used to use and post later today. You can also find similar code on google. If you prefer the database to make the call I have a procedure that I can share that also made a call to a webservice in order to process data. In our sitaution a webservice was a more likable solution since I could not link to the other database as it was a Universe db, and they did not have the ODBC links setup and was too time consuming instead a webservice turned out to be nicer because I could call and post directly to the system each new order, the order contained an xml of the entire order as typed into the access database. If you succeed in setting up a linked server which would be ideal but you'll need to be careful not to run your queries so open ended, you'll most likely want to ensure that you are not exceeding the Client IT's threshold for data throughput. but each call to the linked server is pretty much described as SELECT * FROM LinkedServer.Database.Owner.TableName For example: SELECT * FROM ClientServer.ClientDB.dbo.tblremoteBooking On Wed, Jun 18, 2008 at 5:13 PM, Borge Hansen wrote: > Hi, > > I am in desperate need for some input advice as to the following: > > Background - simplified: > Access 2003 application with SQL2005 Db backend > The application is processing requests for short term vacancies (Booking) > on > behalf of a remote Client, identifying suitable Person to fill the vacancy > (Placement). > Bookings are placed - via an application on the remote Client's distributed > intranet - into a table on Client's SQL2005 Server. > Our application is querying this table throughout the day in order to bring > new Bookings into our SQL db table > We also perform updates on a Status Field > Once a Placement has been identified (via text to speech automated phone > calls to potential Person filling the vacancy - but that's a different > story) we write the data to a new record on the remote Client's Placement > table > > on remote client SQL Server: > tblremoteBooking > tblremotePlacement > > on our local SQL Server: > tblBooking > tblPlacement > > We have been given access to the tables on the remote Client Server via a > userprofile/login that gives us read,insert and update rights to these two > tables > > We access the tblremoteBooking table via ODBC linked table to the Access > database window. 
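To flesh out the four-part-name route: the remote server is registered once as a linked server on the local instance, after which it can be referenced inside stored procedures, and a distributed transaction (with MS DTC running on both machines) covers the insert/update-with-rollback requirement across the two databases. The server, data source and login names below are placeholders, not the real ones:

EXEC sp_addlinkedserver @server = N'ClientServer', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = N'ClientSqlBox'    -- placeholder network name
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'ClientServer', @useself = 'false',
     @locallogin = NULL, @rmtuser = N'remote_login', @rmtpassword = N'********'
-- then, inside a local stored procedure:
SELECT * FROM ClientServer.ClientDB.dbo.tblremoteBooking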
> We then do query join between tblremoteBooking and tblBooking to get all > new > bookings from the client into our table > > Other updates and inserts happens via ADODB connection > > Issues / Challenges: > > 1) > There is currently a push from Client's IT people to revoke the direct > table > access and give us a web service for interacting with the tables - I don't > like that because I know very little about web services. > > > 2) > I'd like to be able to perform joins etc between the remote booking table > and our local one in a SP on the sql db - rather than doing it in an access > query on odbc linked tables in the Access database window. > > Similarly I'd like to be able to create a transactional SP that involves > update / insert to both remote and local tables with rollback so that if > one > of the update / insert queries fail we can roll back the whole transaction > and give the data the remote and local tables in sync. > > What's required to access tables on a remote SQL Db inside a local SQL Db > in a stored procedure ? > > 3) > Another developer has created a web portal for our casual relief staff to > go > online and have a look at Bookings on the web portal and accept a suitable > booking. > All is tested and ready to go except for the web app to have acces to the > client's table for doing an insert to the placement table and an update to > the booking table. > Again, here we are inserting and update to both our 'local' tables and the > client's table so we want the same stored procedures put in place to ensure > data is in sync.... > > > What to do? > > Assuming that the Client would give us access the the remote tables, can > someone give pointers to how to establish connection to these tables from > our local SQL Db so we can manipulate these tables in a Stored Procedure. > > Assuming that we are going with a web service for reading and writing to > the > remote tables - can we then still access these tables in a SP - or will we > have to do this elsewhere (other coding environment) .... and remember we > are doing this in VBA ! > > Any comments / advice appreciated > > Regards > borge > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From robert at webedb.com Thu Jun 19 13:15:24 2008 From: robert at webedb.com (Robert L. Stewart) Date: Thu, 19 Jun 2008 13:15:24 -0500 Subject: [dba-SQLServer] Can't drop the PK from a table In-Reply-To: References: Message-ID: <200806191819.m5JIJhEQ026438@databaseadvisors.com> Arthur, Check out the 320 gb drive at Western Digital. I upgraded my notebook a couple of weeks after I got it. :-) Last I looked, they were about $179 USD each. Also, at Costco, I saw the 1tb external drive from Western Digital for around $150 USD. Robert At 12:00 PM 6/19/2008, you wrote: >Date: Thu, 19 Jun 2008 06:18:52 -0300 >From: "Arthur Fuller" >Subject: Re: [dba-SQLServer] Can't drop the PK from a table >To: "Discussion concerning MS SQL Server" > >Message-ID: > <29f585dd0806190218i20036d60qa87ec8c4f7798589 at mail.gmail.com> >Content-Type: text/plain; charset=ISO-8859-1 > >I ran out of disk space, which is hilarious. I bought this new notebook less >than a month ago and it's got a 100 GB drive, and I'm already out of disk >space. Time to look at one of those USB drives. 
I've never played with one >before but I see that they come in sizes up to a TB. My gut feeling is that >they have to be a lot slower than a normal hard disk. > >Can anyone who has one of these respond with even vague comparisons of >speed? > >A. From fuller.artful at gmail.com Mon Jun 23 06:30:13 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 23 Jun 2008 08:30:13 -0300 Subject: [dba-SQLServer] Index Fragmentation Message-ID: <29f585dd0806230430qc3c9d70scb30e7694c26b3a@mail.gmail.com> Now I am very puzzled. After noticing a number of seriously fragmented indexes, I ran a job over the weekend, using the following syntax: ALTER INDEX IX_900_PortfolioSummary__PPCID_PortID ON dbo.[900_PortfolioSummary] REBUILD etc. To my surprise, the indexes in question remain just as fragmented as they were before the job. Can anyone advise me why this is so? I thought that a REBUILD completely rebuilt the index with no fragmentation. Thanks, Arthur From fuller.artful at gmail.com Mon Jun 23 14:36:19 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 23 Jun 2008 16:36:19 -0300 Subject: [dba-SQLServer] Scalar UDFs -- NOT! Message-ID: <29f585dd0806231236m3f56478dx2bb33fc1c9c676b7@mail.gmail.com> When scalar UDFs first appeared, I was a big fan. No longer. In my current project, we had a bunch of them and in some sprocs they were called frequently. In one sproc various UDFs were called 57 times (they were emebedded in CASE WHEN blocks and such, mostly, but some were called from WHERE clauses. The first sproc I investigate was taking 28 seconds to execute. I copied it and replaced all the UDFs with hard-coded calls. The execution time went down to 3 seconds. I then tackled another time-consuming sproc (23 seconds) and did the same thing. The results were even more spectacular. Execution time as reported in Query Analyzer was zero seconds. I'm big enough to eat my own words. I've written various articles about how cool scalar UDFs are. I take it all back. Arthur From accessd at shaw.ca Mon Jun 23 16:01:08 2008 From: accessd at shaw.ca (Jim Lawrence) Date: Mon, 23 Jun 2008 14:01:08 -0700 Subject: [dba-SQLServer] Scalar UDFs -- NOT! In-Reply-To: <29f585dd0806231236m3f56478dx2bb33fc1c9c676b7@mail.gmail.com> References: <29f585dd0806231236m3f56478dx2bb33fc1c9c676b7@mail.gmail.com> Message-ID: Hi Arthur: Does the poor performance persist even when the UDF is called a numbers time? It should load into memory and optimize when called a number of times? Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Monday, June 23, 2008 12:36 PM To: Discussion concerning MS SQL Server Subject: [dba-SQLServer] Scalar UDFs -- NOT! When scalar UDFs first appeared, I was a big fan. No longer. In my current project, we had a bunch of them and in some sprocs they were called frequently. In one sproc various UDFs were called 57 times (they were emebedded in CASE WHEN blocks and such, mostly, but some were called from WHERE clauses. The first sproc I investigate was taking 28 seconds to execute. I copied it and replaced all the UDFs with hard-coded calls. The execution time went down to 3 seconds. I then tackled another time-consuming sproc (23 seconds) and did the same thing. The results were even more spectacular. Execution time as reported in Query Analyzer was zero seconds. I'm big enough to eat my own words. I've written various articles about how cool scalar UDFs are. 
I take it all back. Arthur _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Mon Jun 23 16:30:39 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 23 Jun 2008 14:30:39 -0700 Subject: [dba-SQLServer] Index Fragmentation In-Reply-To: <29f585dd0806230430qc3c9d70scb30e7694c26b3a@mail.gmail.com> References: <29f585dd0806230430qc3c9d70scb30e7694c26b3a@mail.gmail.com> Message-ID: Hey Arthur, I'm not ignoring your post, but have gone out to do some re-search in between my normal tasks :). I'll let you know what I find. -- Francisco On Mon, Jun 23, 2008 at 4:30 AM, Arthur Fuller wrote: > Now I am very puzzled. After noticing a number of seriously fragmented > indexes, I ran a job over the weekend, using the following syntax: > > ALTER INDEX IX_900_PortfolioSummary__PPCID_PortID ON > dbo.[900_PortfolioSummary] REBUILD > etc. > > To my surprise, the indexes in question remain just as fragmented as they > were before the job. Can anyone advise me why this is so? I thought that a > REBUILD completely rebuilt the index with no fragmentation. > > Thanks, > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://sqlthis.blogspot.com | Tsql and More... From fuller.artful at gmail.com Tue Jun 24 05:12:48 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 24 Jun 2008 07:12:48 -0300 Subject: [dba-SQLServer] Scalar UDFs -- NOT! In-Reply-To: References: <29f585dd0806231236m3f56478dx2bb33fc1c9c676b7@mail.gmail.com> Message-ID: <29f585dd0806240312y3301ee9btd66414c66d378968@mail.gmail.com> The problem, insofar as my investigations have identified it (serious qualifer there), is that calling a UDF in a WHERE clause causes it to execute for every potential row, emphasis on Potential, i.e. every actual row investigated prior to the WHERE clause's cutting down the qualifying rows. And I think (just guessing) this is where the performance penalty kicks in. So let me revise my previous diatribe and say instead "don't use scalar UDFs in a WHERE clause". The UDFs in question did things such as return the PK corresponding to a description, i.e. BankAccountType('Savings') might return 3, say. The original consultant thought that these UDFs would be good because if anything changed, it would only have to change in one place, and to that extent he is certainly correct. But in practice it turns out that calling a bunch of these (similar) functions costs us dearly. 23 seconds to zero seconds is meaningful, especially given that the sproc in question is called frequently. New topic: Suppose that a web app calls sproc A, passing parm 123, and its execution takes 3 seconds. Suppose that another user logs on and at approximately the same time executes sproc A, passing parm 234. Suppose that another user logs on and at approximately the same time executes sproc A, passing parm 345. What happens to the execution time? Does each additional user executing the same sproc with different parms cause the time to multiply by the number of users? Or perhaps it's better engineered than that and some optimization occurs under the covers. I don't know enough about the belly of the beast to even guess about this. Does somebody on this list? TIA. 
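To make the WHERE-clause point concrete: the usual cure is to resolve the lookup once, then filter on the plain column, so no scalar UDF has to run per candidate row. The table and column names below are invented purely to mirror the BankAccountType('Savings') pattern described above:

DECLARE @SavingsTypeID int
SELECT @SavingsTypeID = BankAccountTypeID
FROM dbo.BankAccountType
WHERE [Description] = 'Savings'

SELECT ClientAssetID, [Owner], MarketValue
FROM dbo.ClientAsset
WHERE BankAccountTypeID = @SavingsTypeID    -- plain, indexable predicate; the lookup ran once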
Arthur On Mon, Jun 23, 2008 at 6:01 PM, Jim Lawrence wrote: > Hi Arthur: > > Does the poor performance persist even when the UDF is called a numbers > time? It should load into memory and optimize when called a number of > times? > > Jim > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur > Fuller > Sent: Monday, June 23, 2008 12:36 PM > To: Discussion concerning MS SQL Server > Subject: [dba-SQLServer] Scalar UDFs -- NOT! > > When scalar UDFs first appeared, I was a big fan. No longer. In my current > project, we had a bunch of them and in some sprocs they were called > frequently. In one sproc various UDFs were called 57 times (they were > emebedded in CASE WHEN blocks and such, mostly, but some were called from > WHERE clauses. The first sproc I investigate was taking 28 seconds to > execute. I copied it and replaced all the UDFs with hard-coded calls. The > execution time went down to 3 seconds. I then tackled another > time-consuming > sproc (23 seconds) and did the same thing. The results were even more > spectacular. Execution time as reported in Query Analyzer was zero seconds. > > I'm big enough to eat my own words. I've written various articles about how > cool scalar UDFs are. I take it all back. > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Tue Jun 24 05:14:08 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 24 Jun 2008 07:14:08 -0300 Subject: [dba-SQLServer] Index Fragmentation In-Reply-To: References: <29f585dd0806230430qc3c9d70scb30e7694c26b3a@mail.gmail.com> Message-ID: <29f585dd0806240314r54325fdbh632c4ccc37034021@mail.gmail.com> I've been investigating too. I will be interested to see what our investigations uncover. A. On Mon, Jun 23, 2008 at 6:30 PM, Francisco Tapia wrote: > Hey Arthur, > I'm not ignoring your post, but have gone out to do some re-search in > between my normal tasks :). I'll let you know what I find. > > -- > Francisco > From davidmcafee at gmail.com Tue Jun 24 14:00:32 2008 From: davidmcafee at gmail.com (David McAfee) Date: Tue, 24 Jun 2008 12:00:32 -0700 Subject: [dba-SQLServer] Scalar UDFs -- NOT! In-Reply-To: <29f585dd0806240312y3301ee9btd66414c66d378968@mail.gmail.com> References: <29f585dd0806231236m3f56478dx2bb33fc1c9c676b7@mail.gmail.com> <29f585dd0806240312y3301ee9btd66414c66d378968@mail.gmail.com> Message-ID: <8786a4c00806241200x4f10d76axa321088845ce24c2@mail.gmail.com> Arthur, would it be any faster to insert the result (sans udf) into a temp table (or table variable) then run the udf against the data in temp table instead of every possible record? David On Tue, Jun 24, 2008 at 3:12 AM, Arthur Fuller wrote: > The problem, insofar as my investigations have identified it (serious > qualifer there), is that calling a UDF in a WHERE clause causes it to > execute for every potential row, emphasis on Potential, i.e. every actual > row investigated prior to the WHERE clause's cutting down the qualifying > rows. And I think (just guessing) this is where the performance penalty > kicks in. 
So let me revise my previous diatribe and say instead "don't use > scalar UDFs in a WHERE clause". > > The UDFs in question did things such as return the PK corresponding to a > description, i.e. BankAccountType('Savings') might return 3, say. The > original consultant thought that these UDFs would be good because if > anything changed, it would only have to change in one place, and to that > extent he is certainly correct. But in practice it turns out that calling a > bunch of these (similar) functions costs us dearly. 23 seconds to zero > seconds is meaningful, especially given that the sproc in question is called > frequently. > > New topic: > Suppose that a web app calls sproc A, passing parm 123, and its execution > takes 3 seconds. > Suppose that another user logs on and at approximately the same time > executes sproc A, passing parm 234. > Suppose that another user logs on and at approximately the same time > executes sproc A, passing parm 345. > > What happens to the execution time? Does each additional user executing the > same sproc with different parms cause the time to multiply by the number of > users? Or perhaps it's better engineered than that and some optimization > occurs under the covers. I don't know enough about the belly of the beast to > even guess about this. Does somebody on this list? > > TIA. > Arthur > > > On Mon, Jun 23, 2008 at 6:01 PM, Jim Lawrence wrote: > >> Hi Arthur: >> >> Does the poor performance persist even when the UDF is called a numbers >> time? It should load into memory and optimize when called a number of >> times? >> >> Jim >> >> -----Original Message----- >> From: dba-sqlserver-bounces at databaseadvisors.com >> [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur >> Fuller >> Sent: Monday, June 23, 2008 12:36 PM >> To: Discussion concerning MS SQL Server >> Subject: [dba-SQLServer] Scalar UDFs -- NOT! >> >> When scalar UDFs first appeared, I was a big fan. No longer. In my current >> project, we had a bunch of them and in some sprocs they were called >> frequently. In one sproc various UDFs were called 57 times (they were >> emebedded in CASE WHEN blocks and such, mostly, but some were called from >> WHERE clauses. The first sproc I investigate was taking 28 seconds to >> execute. I copied it and replaced all the UDFs with hard-coded calls. The >> execution time went down to 3 seconds. I then tackled another >> time-consuming >> sproc (23 seconds) and did the same thing. The results were even more >> spectacular. Execution time as reported in Query Analyzer was zero seconds. >> >> I'm big enough to eat my own words. I've written various articles about how >> cool scalar UDFs are. I take it all back. 
>> >> Arthur >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From greg at worthey.com Wed Jun 25 21:41:55 2008 From: greg at worthey.com (Greg Worthey) Date: Wed, 25 Jun 2008 19:41:55 -0700 Subject: [dba-SQLServer] dba-SQLServer Digest, Vol 64, Issue 20 In-Reply-To: References: Message-ID: Arthur, I know what you mean. A whole lot of features in sql server (and other technologies) sound great and advanced in theory, but in the widely-unknown innards of the beast, and actual practice, many great features/proper-methods are reduced to big steaming piles. The general idea of sql, as I understood it, is to have a subsystem that knows all about optimizing database access/management, and does all sorts of egghead db optimizations to make the queries/etc super fast. In my experience, on average, the reality of sql seems to be that it creates huge slogging bottlenecks in a whole slew of different unexpected ways. (I could swear JET or Rushmore were far superior!) You can get to become an expert and predict and prevent an ever-growing list of those ways (e.g. don't use UDF's), but when you're all done, you have a system that is forced to be rather ugly and verbose and repetitive where it ought to be (needs to be) elegant and concise. Creates bloat and bugs and unforeseeable problems of all kinds. MySQL doesn't have UDF's, stored procedures, or views. But I don't miss them because those features seem to me so primitive in the full sql implementations that it sort of makes a farce of the idea of a specialized db subsystem. In theory, it's great form to separate data layer and app layers, and so on. But in practice it often just creates whole new fields and pits of gotchas and half-implemented under-baked theoretically complete functionality. It seems to me that more and more we're working against armies of under-designed features-by-committee. Greg ---------------------------------------------------------------------- Message: 1 Date: Tue, 24 Jun 2008 12:00:32 -0700 From: "David McAfee" Subject: Re: [dba-SQLServer] Scalar UDFs -- NOT! To: "Discussion concerning MS SQL Server" Message-ID: <8786a4c00806241200x4f10d76axa321088845ce24c2 at mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Arthur, would it be any faster to insert the result (sans udf) into a temp table (or table variable) then run the udf against the data in temp table instead of every possible record? David On Tue, Jun 24, 2008 at 3:12 AM, Arthur Fuller wrote: > The problem, insofar as my investigations have identified it (serious > qualifer there), is that calling a UDF in a WHERE clause causes it to > execute for every potential row, emphasis on Potential, i.e. every actual > row investigated prior to the WHERE clause's cutting down the qualifying > rows. And I think (just guessing) this is where the performance penalty > kicks in. 
So let me revise my previous diatribe and say instead "don't use > scalar UDFs in a WHERE clause". > > The UDFs in question did things such as return the PK corresponding to a > description, i.e. BankAccountType('Savings') might return 3, say. The > original consultant thought that these UDFs would be good because if > anything changed, it would only have to change in one place, and to that > extent he is certainly correct. But in practice it turns out that calling a > bunch of these (similar) functions costs us dearly. 23 seconds to zero > seconds is meaningful, especially given that the sproc in question is called > frequently. > > New topic: > Suppose that a web app calls sproc A, passing parm 123, and its execution > takes 3 seconds. > Suppose that another user logs on and at approximately the same time > executes sproc A, passing parm 234. > Suppose that another user logs on and at approximately the same time > executes sproc A, passing parm 345. > > What happens to the execution time? Does each additional user executing the > same sproc with different parms cause the time to multiply by the number of > users? Or perhaps it's better engineered than that and some optimization > occurs under the covers. I don't know enough about the belly of the beast to > even guess about this. Does somebody on this list? > > TIA. > Arthur > > > On Mon, Jun 23, 2008 at 6:01 PM, Jim Lawrence wrote: > >> Hi Arthur: >> >> Does the poor performance persist even when the UDF is called a numbers >> time? It should load into memory and optimize when called a number of >> times? >> >> Jim >> >> -----Original Message----- >> From: dba-sqlserver-bounces at databaseadvisors.com >> [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur >> Fuller >> Sent: Monday, June 23, 2008 12:36 PM >> To: Discussion concerning MS SQL Server >> Subject: [dba-SQLServer] Scalar UDFs -- NOT! >> >> When scalar UDFs first appeared, I was a big fan. No longer. In my current >> project, we had a bunch of them and in some sprocs they were called >> frequently. In one sproc various UDFs were called 57 times (they were >> emebedded in CASE WHEN blocks and such, mostly, but some were called from >> WHERE clauses. The first sproc I investigate was taking 28 seconds to >> execute. I copied it and replaced all the UDFs with hard-coded calls. The >> execution time went down to 3 seconds. I then tackled another >> time-consuming >> sproc (23 seconds) and did the same thing. The results were even more >> spectacular. Execution time as reported in Query Analyzer was zero seconds. >> >> I'm big enough to eat my own words. I've written various articles about how >> cool scalar UDFs are. I take it all back. 
>> >> Arthur >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > ------------------------------ _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver End of dba-SQLServer Digest, Vol 64, Issue 20 ********************************************* From Darryl.Collins at coles.com.au Thu Jun 26 01:59:58 2008 From: Darryl.Collins at coles.com.au (Darryl Collins) Date: Thu, 26 Jun 2008 16:59:58 +1000 Subject: [dba-SQLServer] Linking Tables(?) Message-ID: <49DFE57FB126044B8A8B934E7AEA09ED09FD75E6@WPEXCH05.colesmyer.ad.cmltd.net.au> Hi everyone. This is my first week into a new world (well new for me) of SQL Server 2000 and my first post here. Quick background. This list was recommended by the folks on AccessD (some names here I already recognise). My background is primarily Excel, VBA and more recently MS Access. I now have to learn SQL Server for work so fun days ahead going back to being the new kid in the class - I will apologise now for asking some pretty basic questions from time to time. ok.. Here is the issue I have now. I have Table1 which is made up of Table1keyID (primary key) Table2KeyID (FK) Table3KeyID (FK) Table2 data 1 Alpha 2 beta 3 delta Table3 data 1 Blue 2 Red 3 Green Table1 desired outcome 1 Alpha Blue 2 Alpha Green 3 Alpha Red 4 beta Blue 5 delta Red anyway.. you get the idea. In Access I could make the Table2 and 3 KeyID fields a drop down combo box, set the number of visible columns - that sort of thing. Now in SQL i have created the relationship between Table1 and 2/3, but I am trying to add data into table 1 and I want it restricted only to data from Table2 and 3 respectively - I liked the combo drop down functionality in Access in the table itself. What is the best (or correct) way to do this in a SQL table? I hope that is clear. Regards Darryl. -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Arthur Fuller Sent: Tuesday, 24 June 2008 8:14 PM To: Discussion concerning MS SQL Server Subject: Re: [dba-SQLServer] Index Fragmentation I've been investigating too. I will be interested to see what our investigations uncover. A. On Mon, Jun 23, 2008 at 6:30 PM, Francisco Tapia wrote: > Hey Arthur, > I'm not ignoring your post, but have gone out to do some re-search in > between my normal tasks :). I'll let you know what I find. > > -- > Francisco > _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com This email and any attachments may contain privileged and confidential information and are intended for the named addressee only. 
If you have received this e-mail in error, please notify the sender and delete this e-mail immediately. Any confidentiality, privilege or copyright is not waived or lost because this e-mail has been sent to you in error. It is your responsibility to check this e-mail and any attachments for viruses. No warranty is made that this material is free from computer virus or any other defect or error. Any loss/damage incurred by using this material is not the sender's responsibility. The sender's entire liability will be limited to resupplying the material. This email and any attachments may contain privileged and confidential information and are intended for the named addressee only. If you have received this e-mail in error, please notify the sender and delete this e-mail immediately. Any confidentiality, privilege or copyright is not waived or lost because this e-mail has been sent to you in error. It is your responsibility to check this e-mail and any attachments for viruses. No warranty is made that this material is free from computer virus or any other defect or error. Any loss/damage incurred by using this material is not the sender's responsibility. The sender's entire liability will be limited to resupplying the material. From Elizabeth.J.Doering at wellsfargo.com Thu Jun 26 07:04:47 2008 From: Elizabeth.J.Doering at wellsfargo.com (Elizabeth.J.Doering at wellsfargo.com) Date: Thu, 26 Jun 2008 07:04:47 -0500 Subject: [dba-SQLServer] Linking Tables(?) References: <49DFE57FB126044B8A8B934E7AEA09ED09FD75E6@WPEXCH05.colesmyer.ad.cmltd.net.au> Message-ID: Welcome, Darryl. You will find that there is nothing so form-like and user-friendly available in SQL Server. If Table1 has got numeric ID fields in it, it just has numeric ID fields in it, and you won't be able to hide this. You can link your tables to your old friend Access and use that as a front-end for nice GUIs, but SQL Server is about storage--relatively large storage--and nothing else. No GUIs to be had. Having come from a Access background myself, this was hard to accept. But with massive help from Books on Line (SQL Server Help) and this list, you will find it will all come together in time. Good luck, and have fun! Liz Liz Doering Systems Engineer Technology Information Group elizabeth.j.doering at wellsfargo.com This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Darryl Collins Sent: Thursday, June 26, 2008 2:00 AM To: Discussion concerning MS SQL Server Subject: [dba-SQLServer] Linking Tables(?) Hi everyone. This is my first week into a new world (well new for me) of SQL Server 2000 and my first post here. Quick background. This list was recommended by the folks on AccessD (some names here I already recognise). My background is primarily Excel, VBA and more recently MS Access. I now have to learn SQL Server for work so fun days ahead going back to being the new kid in the class - I will apologise now for asking some pretty basic questions from time to time. ok.. Here is the issue I have now. 
I have Table1 which is made up of Table1keyID (primary key) Table2KeyID (FK) Table3KeyID (FK) Table2 data 1 Alpha 2 beta 3 delta Table3 data 1 Blue 2 Red 3 Green Table1 desired outcome 1 Alpha Blue 2 Alpha Green 3 Alpha Red 4 beta Blue 5 delta Red anyway.. you get the idea. In Access I could make the Table2 and 3 KeyID fields a drop down combo box, set the number of visible columns - that sort of thing. Now in SQL i have created the relationship between Table1 and 2/3, but I am trying to add data into table 1 and I want it restricted only to data from Table2 and 3 respectively - I liked the combo drop down functionality in Access in the table itself. What is the best (or correct) way to do this in a SQL table? I hope that is clear. Regards Darryl. From ssharkins at gmail.com Thu Jun 26 07:45:26 2008 From: ssharkins at gmail.com (Susan Harkins) Date: Thu, 26 Jun 2008 08:45:26 -0400 Subject: [dba-SQLServer] Linking Tables(?) References: <49DFE57FB126044B8A8B934E7AEA09ED09FD75E6@WPEXCH05.colesmyer.ad.cmltd.net.au> Message-ID: <005101c8d78a$845b3e10$2f8601c7@SusanOne> FWIW, VB Express is free and easy to use and depending on your needs, might be a better choice for gui than Access. Susan H. > > You will find that there is nothing so form-like and user-friendly > available in SQL Server. If Table1 has got numeric ID fields in it, it > just has numeric ID fields in it, and you won't be able to hide this. > You can link your tables to your old friend Access and use that as a > front-end for nice GUIs, but SQL Server is about storage--relatively > large storage--and nothing else. No GUIs to be had. From Darryl.Collins at coles.com.au Thu Jun 26 18:53:41 2008 From: Darryl.Collins at coles.com.au (Darryl Collins) Date: Fri, 27 Jun 2008 09:53:41 +1000 Subject: [dba-SQLServer] Linking Tables(?) Message-ID: <49DFE57FB126044B8A8B934E7AEA09ED09FD75EE@WPEXCH05.colesmyer.ad.cmltd.net.au> Thanks Susan and Liz, Due to client restrictions (and they are a massive client - no wriggle room on this) it is SQL Server 2000 backend and Access 2000 Front End. I can live with ID numbers only - At least I now not to dig too hard to find a 'solution' when there isn't one. I am sure this will be the first of 1001 questions I have on this. Thanks for your time and patience. regards Darryl. -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Susan Harkins Sent: Thursday, 26 June 2008 10:45 PM To: Discussion concerning MS SQL Server Subject: Re: [dba-SQLServer] Linking Tables(?) FWIW, VB Express is free and easy to use and depending on your needs, might be a better choice for gui than Access. Susan H. > > You will find that there is nothing so form-like and user-friendly > available in SQL Server. If Table1 has got numeric ID fields in it, it > just has numeric ID fields in it, and you won't be able to hide this. > You can link your tables to your old friend Access and use that as a > front-end for nice GUIs, but SQL Server is about storage--relatively > large storage--and nothing else. No GUIs to be had. _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com This email and any attachments may contain privileged and confidential information and are intended for the named addressee only. 
If you have received this e-mail in error, please notify the sender and delete this e-mail immediately. Any confidentiality, privilege or copyright is not waived or lost because this e-mail has been sent to you in error. It is your responsibility to check this e-mail and any attachments for viruses. No warranty is made that this material is free from computer virus or any other defect or error. Any loss/damage incurred by using this material is not the sender's responsibility. The sender's entire liability will be limited to resupplying the material. This email and any attachments may contain privileged and confidential information and are intended for the named addressee only. If you have received this e-mail in error, please notify the sender and delete this e-mail immediately. Any confidentiality, privilege or copyright is not waived or lost because this e-mail has been sent to you in error. It is your responsibility to check this e-mail and any attachments for viruses. No warranty is made that this material is free from computer virus or any other defect or error. Any loss/damage incurred by using this material is not the sender's responsibility. The sender's entire liability will be limited to resupplying the material. From stuart at lexacorp.com.pg Thu Jun 26 20:17:03 2008 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Fri, 27 Jun 2008 11:17:03 +1000 Subject: [dba-SQLServer] Linking Tables(?) In-Reply-To: <49DFE57FB126044B8A8B934E7AEA09ED09FD75EE@WPEXCH05.colesmyer.ad.cmltd.net.au> Message-ID: <4864CC2F.21179.2915274B@stuart.lexacorp.com.pg> If it's an A2K front end, linked to SQL Server, you can edit the table properties of the linked table in your FE to use a combo box lookup, just as you can with an Access table. On 27 Jun 2008 at 9:53, Darryl Collins wrote: > > Thanks Susan and Liz, > > Due to client restrictions (and they are a massive client - no wriggle room on this) it is SQL Server 2000 backend and Access 2000 Front End. > > I can live with ID numbers only - At least I now not to dig too hard to find a 'solution' when there isn't one. I am sure this will be the first of 1001 > questions I have on this. Thanks for your time and patience. > > regards > Darryl. > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Susan > Harkins > Sent: Thursday, 26 June 2008 10:45 PM > To: Discussion concerning MS SQL Server > Subject: Re: [dba-SQLServer] Linking Tables(?) > > > FWIW, VB Express is free and easy to use and depending on your needs, might > be a better choice for gui than Access. > > Susan H. > > > > > > You will find that there is nothing so form-like and user-friendly > > available in SQL Server. If Table1 has got numeric ID fields in it, it > > just has numeric ID fields in it, and you won't be able to hide this. > > You can link your tables to your old friend Access and use that as a > > front-end for nice GUIs, but SQL Server is about storage--relatively > > large storage--and nothing else. No GUIs to be had. > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > This email and any attachments may contain privileged and confidential information and are intended for the named addressee only. If you have received this e-mail in error, please notify the sender and delete this e-mail immediately. 
From fuller.artful at gmail.com Fri Jun 27 10:06:25 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 27 Jun 2008 12:06:25 -0300 Subject: [dba-SQLServer] Question about Triggers versus Defaults Message-ID: <29f585dd0806270806x377cd0a9q9a9772f77daaa0d2@mail.gmail.com>

The db I'm currently working on has dozens of relatively simple triggers which all do pretty much the same thing. Here is an example:

CREATE TRIGGER [BESTReport].[trgPersonReport_InsertUpdate]
ON [BESTReport].[PersonReport]
AFTER UPDATE, INSERT
AS
BEGIN
    SET NOCOUNT ON

    INSERT INTO [Audit].[PersonReport]
    (
        [PersonReport_ID]
        ,[Person_ID]
        ,[Report_ID]
        ,[Active]
        ,[SystemUser]
        ,[SessionUser]
        ,[OriginalUser]
        ,[ActionDate]
        ,[Application]
    )
    SELECT
        [PersonReport_ID]
        ,[Person_ID]
        ,[Report_ID]
        ,[Active]
        ,SYSTEM_USER
        ,SESSION_USER
        ,original_login()
        ,getdate()
        ,app_name()
    FROM INSERTED
END

My question is this: given that System_User, Session_User, Original_Login(), GetDate() and App_Name() are all available at all times, why not just declare them as defaults on the columns rather than use a trigger? The example cited above serves both Insert and Update, and obviously in the latter case default columns wouldn't work, but my question remains regarding the Insert. Is there a performance hit due to the trigger which would not be incurred with simple defaulted columns?

TIA,
Arthur

From fhtapia at gmail.com Fri Jun 27 12:26:37 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 27 Jun 2008 10:26:37 -0700 Subject: [dba-SQLServer] Question about Triggers versus Defaults In-Reply-To: <29f585dd0806270806x377cd0a9q9a9772f77daaa0d2@mail.gmail.com> References: <29f585dd0806270806x377cd0a9q9a9772f77daaa0d2@mail.gmail.com> Message-ID:

I haven't tested it, but I've always added things like these as default values in columns. I have not noticed any performance impact for default values from built-in functions.

On Fri, Jun 27, 2008 at 8:06 AM, Arthur Fuller wrote:
> The db I'm currently working on has dozens of relatively simple triggers
> which all do pretty much the same thing.
--
-Francisco
http://sqlthis.blogspot.com | Tsql and More...

From fuller.artful at gmail.com Fri Jun 27 12:37:40 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 27 Jun 2008 14:37:40 -0300 Subject: [dba-SQLServer] Question about Triggers versus Defaults In-Reply-To: References: <29f585dd0806270806x377cd0a9q9a9772f77daaa0d2@mail.gmail.com> Message-ID: <29f585dd0806271037j1dae0ad0s45262198b26b067e@mail.gmail.com>

Thanks Francisco. I have never noticed any performance penalty on default column values either, but I was wondering about the trigger vs. the default value thing. Since I have a copy of Red Gate's Data Generator, I suppose that I can settle this question objectively: create a pair of tables, one with a trigger and the other with defaults, do a bunch of inserts into both, and see what happens.

A.

On Fri, Jun 27, 2008 at 2:26 PM, Francisco Tapia wrote:
> I haven't tested it, but I've always added things like these as default
> values in columns. I have not noticed any performance impact for default
> values from built-in functions.
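For the insert-only case Arthur asks about, the default-constraint alternative might look like the sketch below. It assumes the audit columns (SystemUser, SessionUser, OriginalUser, ActionDate, Application) exist on the table being written to and are simply omitted from the INSERT column list so that the defaults fire; whether that fits the existing audit design is not settled in the thread.

-- A sketch only: default constraints supplying the same values the trigger captures.
-- Shown here against the audit table from the example; the same pattern applies to
-- any table that carries these columns.
ALTER TABLE [Audit].[PersonReport]
    ADD CONSTRAINT DF_PersonReport_SystemUser   DEFAULT (SYSTEM_USER)      FOR [SystemUser],
        CONSTRAINT DF_PersonReport_SessionUser  DEFAULT (SESSION_USER)     FOR [SessionUser],
        CONSTRAINT DF_PersonReport_OriginalUser DEFAULT (ORIGINAL_LOGIN()) FOR [OriginalUser],
        CONSTRAINT DF_PersonReport_ActionDate   DEFAULT (GETDATE())        FOR [ActionDate],
        CONSTRAINT DF_PersonReport_Application  DEFAULT (APP_NAME())       FOR [Application]

-- Any INSERT that leaves these columns out now gets them filled in automatically,
-- which is exactly the behaviour defaults cannot provide on UPDATE.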
From fuller.artful at gmail.com Fri Jun 27 13:48:42 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 27 Jun 2008 15:48:42 -0300 Subject: [dba-SQLServer] Version Control Integration Message-ID: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com>

Anyone have suggestions about the best way to integrate version control with SQL 2005? Obvious products include VSS and Subversion. I'm polling for experiences with both, and also asking whether there is some other product I should look at. At my new workplace, we don't have anything installed in terms of version control. This scares me and I want to do something about it asap.

TIA,
Arthur

From word_diva at hotmail.com Fri Jun 27 14:07:06 2008 From: word_diva at hotmail.com (Nancy Lytle) Date: Fri, 27 Jun 2008 14:07:06 -0500 Subject: [dba-SQLServer] Version Control Integration In-Reply-To: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> References: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> Message-ID:

Team Foundation Server integrates with SQL Server inside of Visual Studio 2005 - have you checked it out? There is a lot to it, but also a lot it can do for you.

Nancy

From fuller.artful at gmail.com Fri Jun 27 14:10:44 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 27 Jun 2008 16:10:44 -0300 Subject: [dba-SQLServer] Version Control Integration In-Reply-To: References: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> Message-ID: <29f585dd0806271210w2e4f6712maf3f13cb1ebfcb07@mail.gmail.com>

I haven't checked it out and we currently don't have a license for it, but I will look into it. Thanks.

Arthur

On Fri, Jun 27, 2008 at 4:07 PM, Nancy Lytle wrote:
> Team Foundation Server integrates with SQL Server inside of Visual Studio
> 2005 - have you checked it out? There is a lot to it, but also a lot it can do
> for you.
>
> Nancy

From word_diva at hotmail.com Fri Jun 27 15:37:47 2008 From: word_diva at hotmail.com (Nancy Lytle) Date: Fri, 27 Jun 2008 15:37:47 -0500 Subject: [dba-SQLServer] Version Control Integration In-Reply-To: <29f585dd0806271210w2e4f6712maf3f13cb1ebfcb07@mail.gmail.com> References: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> <29f585dd0806271210w2e4f6712maf3f13cb1ebfcb07@mail.gmail.com> Message-ID:

I don't know what your developer situation is (how many, etc.), but TFS lets DBAs and developers share much more than just version control. You can add tasks across from Dev to DBA and have them included in project monitoring, etc. The only problem can be if you use Linked Servers; then setting up the version control can be a bit more of a pain, depending on your environment.

Nancy
From fuller.artful at gmail.com Sat Jun 28 05:35:07 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sat, 28 Jun 2008 07:35:07 -0300 Subject: [dba-SQLServer] Version Control Integration In-Reply-To: References: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> <29f585dd0806271210w2e4f6712maf3f13cb1ebfcb07@mail.gmail.com> Message-ID: <29f585dd0806280335g77238bbfo6aba9fa8eedbbe8d@mail.gmail.com>

Thanks for the insights, Nancy. My situation is approximately this, give or take a detail.

I have been brought on to take ownership of the database, which has previously been maintained by one guy who is actually a trader but sure knows his way around databases. He has made a mistake or two here and there, and some decisions I might question, but given the 150 tables or so and the attendant sprocs, he has done a remarkable job for a guy whose profession is stock trading, not database design. We have one .NET guy who is entirely responsible for the web interface; he writes most everything of consequence in C#.

We have the traditional db units in place, to wit Dev, Staging and Production, but there is no concrete and verifiable version control. I persuaded them to license the Red Gate Toolbelt, so we are beginning to get on the right track and can do comparisons between Dev, Staging and Production, for example, but we still don't have version control in place, and this worries me.

Ideally, I want to get to a single solution that handles both my changes to the database and the changes to the front end (the .NET part) and can do all the expected things like diff the versions, roll back to yesterday's or last week's version, and so on.

A.

On Fri, Jun 27, 2008 at 5:37 PM, Nancy Lytle wrote:
> I don't know what your developer situation is (how many, etc.), but TFS
> lets DBAs and developers share much more than just version control. You
> can add tasks across from Dev to DBA and have them included in project
> monitoring, etc. The only problem can be if you use Linked Servers; then
> setting up the version control can be a bit more of a pain, depending on
> your environment.
>
> Nancy

From word_diva at hotmail.com Sat Jun 28 06:49:30 2008 From: word_diva at hotmail.com (Nancy Lytle) Date: Sat, 28 Jun 2008 06:49:30 -0500 Subject: [dba-SQLServer] Version Control Integration In-Reply-To: <29f585dd0806280335g77238bbfo6aba9fa8eedbbe8d@mail.gmail.com> References: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> <29f585dd0806271210w2e4f6712maf3f13cb1ebfcb07@mail.gmail.com> <29f585dd0806280335g77238bbfo6aba9fa8eedbbe8d@mail.gmail.com> Message-ID:

Red Gate has another tool called SQL Changeset (actually I believe it is an add-in for SQL Compare) that can be used with VSS or TFS; that might be another piece of the puzzle. Plus, using SQL Compare to take snapshots before/after deployments gives you a way to archive DB schema versions.
For your situation TFS might be overkill, but it integrates so well with VS 2005 that you might just check out the licensing options.

For now, I'd set up folders for each environment and put all scripts in those folders, with only you having write permissions to the SQL folders (and encourage the programmers to set up their own folder). It's primitive, but with only a few people it should work until you decide on a true version control system.

Sorry I can't be of much more help, but most places I have worked tend to ignore version control completely, especially when it comes to databases.

Nancy

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com
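Until a real version control system is in place, Nancy's folder approach can be fed by something as simple as a query that scripts out the programmable objects. A rough sketch (SQL Server 2005 catalog views; how the results get written out to files per environment is left open and would be up to the reader):

-- Lists every proc, view, trigger and function with its T-SQL definition,
-- which can be saved into per-environment folders as a crude schema snapshot.
SELECT o.name,
       o.type_desc,
       o.modify_date,
       OBJECT_DEFINITION(o.object_id) AS definition
FROM sys.objects AS o
WHERE o.type IN ('P', 'V', 'TR', 'FN', 'IF', 'TF')
ORDER BY o.type_desc, o.name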
From fuller.artful at gmail.com Sat Jun 28 07:09:52 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sat, 28 Jun 2008 09:09:52 -0300 Subject: [dba-SQLServer] Version Control Integration In-Reply-To: References: <29f585dd0806271148i1fddc32bhc08dee67543f4b15@mail.gmail.com> <29f585dd0806271210w2e4f6712maf3f13cb1ebfcb07@mail.gmail.com> <29f585dd0806280335g77238bbfo6aba9fa8eedbbe8d@mail.gmail.com> Message-ID: <29f585dd0806280509j5e6cb78anb1f7149fda0ba2cf@mail.gmail.com>

I have the whole Toolbelt but have not yet used ChangeSet. I'll give it a spin right now. Thanks.

A.

On Sat, Jun 28, 2008 at 8:49 AM, Nancy Lytle wrote:
> Red Gate has another tool called SQL Changeset (actually I believe it is an
> add-in for SQL Compare) that can be used with VSS or TFS; that might be
> another piece of the puzzle. Plus, using SQL Compare to take snapshots
> before/after deployments gives you a way to archive DB schema versions.

From fuller.artful at gmail.com Mon Jun 30 07:23:16 2008 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 30 Jun 2008 09:23:16 -0300 Subject: [dba-SQLServer] Views versus derived tables versus table UDFs Message-ID: <29f585dd0806300523s205c4aebtd874ac36f3ef56ee@mail.gmail.com>

Does anyone have any opinions on the performance of views versus derived tables versus table UDFs? I've been looking at some of the code here at my new job and I see some pretty extensive use of derived tables. In the past, I have always used views or table UDFs for such operations, pretty much because it simplified the code rather than because I did performance checks.

TIA,
Arthur

From fhtapia at gmail.com Mon Jun 30 13:32:21 2008 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 30 Jun 2008 11:32:21 -0700 Subject: [dba-SQLServer] Views versus derived tables versus table UDFs In-Reply-To: <29f585dd0806300523s205c4aebtd874ac36f3ef56ee@mail.gmail.com> References: <29f585dd0806300523s205c4aebtd874ac36f3ef56ee@mail.gmail.com> Message-ID:

Small UDF tables are fast, so there is not much of a performance hit since most of the data is loaded into RAM, but if you are dealing with thousands of rows then you will want to redo those UDFs as sprocs or views if possible :) imho

On Mon, Jun 30, 2008 at 5:23 AM, Arthur Fuller wrote:
> Does anyone have any opinions on the performance of views versus derived
> tables versus table UDFs? I've been looking at some of the code here at my
> new job and I see some pretty extensive use of derived tables.

--
-Francisco
http://sqlthis.blogspot.com | Tsql and More...
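To make the comparison concrete, here is the same lookup written three ways: as a derived table, a view, and an inline table-valued UDF. The Customers/Orders tables are invented purely for illustration. Views and inline TVFs are generally expanded into the outer query by the optimizer, so for a query like this the differences tend to be readability and reuse rather than raw speed; multi-statement TVFs behave differently because their results are materialized first.

-- 1. Derived table, written inline in the query
SELECT c.CustomerID, oc.OrderCount
FROM Customers AS c
JOIN (SELECT CustomerID, COUNT(*) AS OrderCount
      FROM Orders
      GROUP BY CustomerID) AS oc
  ON oc.CustomerID = c.CustomerID
GO

-- 2. The same logic wrapped in a view
CREATE VIEW dbo.vwOrderCounts
AS
SELECT CustomerID, COUNT(*) AS OrderCount
FROM Orders
GROUP BY CustomerID
GO

-- 3. The same logic as an inline table-valued UDF
CREATE FUNCTION dbo.fnOrderCounts ()
RETURNS TABLE
AS
RETURN (SELECT CustomerID, COUNT(*) AS OrderCount
        FROM Orders
        GROUP BY CustomerID)
GO

SELECT c.CustomerID, oc.OrderCount
FROM Customers AS c
JOIN dbo.fnOrderCounts() AS oc ON oc.CustomerID = c.CustomerID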
From ab-mi at post3.tele.dk Mon Jun 30 16:58:11 2008 From: ab-mi at post3.tele.dk (Asger Blond) Date: Mon, 30 Jun 2008 23:58:11 +0200 Subject: [dba-SQLServer] Views versus derived tables versus table UDFs In-Reply-To: Message-ID: <000001c8dafc$63d4bfc0$2101a8c0@AB>

Also notice SQL Server's strategy for execution plan caching. Execution plans are saved in the cache whether you use ad hoc queries, views, udfs, or sprocs, but ad hoc queries, views, and udfs will normally age out of the cache sooner than sprocs will. Only for sprocs does SQL Server record the cost of query compilation, and if the query in your sproc is reasonably complex, the execution plan will survive longer in the cache than an equivalent ad hoc query, view, or udf. For that reason I would normally prefer sprocs.

But of course plan caching is only one factor affecting performance. Implementing sprocs, you still have to decide whether your sproc should query tables, views, udfs, or perhaps make use of the new CTEs (Common Table Expressions). For this I too would appreciate experiences/recommendations/opinions.

Asger

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia
Sent: 30 June 2008 20:32
To: Discussion concerning MS SQL Server
Subject: Re: [dba-SQLServer] Views versus derived tables versus table UDFs

Small UDF tables are fast, so there is not much of a performance hit since most of the data is loaded into RAM, but if you are dealing with thousands of rows then you will want to redo those UDFs as sprocs or views if possible :) imho

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com

From robert at webedb.com Mon Jun 30 17:59:25 2008 From: robert at webedb.com (Robert L. Stewart) Date: Mon, 30 Jun 2008 17:59:25 -0500 Subject: [dba-SQLServer] dba-SQLServer Digest, Vol 64, Issue 23 In-Reply-To: References: Message-ID: <200806302302.m5UN2IIt010799@databaseadvisors.com>

Arthur,

VSS integrates with SQL Server Management Studio, which makes it easy to use for version control of SQL. It also integrates with VS 2005, so using it for the front end is also easy. Subversion, as far as I know, does not integrate with either one; there is an integration for Eclipse.

At 07:09 AM 6/28/2008, you wrote:
>Date: Fri, 27 Jun 2008 15:48:42 -0300
>From: "Arthur Fuller"
>Subject: [dba-SQLServer] Version Control Integration
>To: "Discussion concerning MS SQL Server"
>Message-ID: <29f585dd0806271148i1fddc32bhc08dee67543f4b15 at mail.gmail.com>
>Content-Type: text/plain; charset=ISO-8859-1
>
>Anyone have suggestions about the best way to integrate version control with
>SQL 2005? Obvious products include VSS and Subversion. I'm polling for
>experiences with both, and also asking whether there is some other product I
>should look at. At my new workplace, we don't have anything installed in
>terms of version control. This scares me and I want to do something about it
>asap.
>
>TIA,
>Arthur
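Asger's point above about plan reuse can be observed directly on SQL Server 2005 with the plan-cache DMVs. A quick illustration, run against whatever database is at hand, not tied to any table in this thread:

-- Shows each cached plan, what kind of object it came from (Proc, View, Adhoc, Prepared, ...)
-- and how many times it has been reused since it entered the cache.
SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC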