From mwp.reid at qub.ac.uk Wed May 7 08:35:46 2014
From: mwp.reid at qub.ac.uk (Martin Reid)
Date: Wed, 7 May 2014 14:35:46 +0100
Subject: [dba-SQLServer] FW: SQL Cluster
Message-ID: <631CF83223105545BF43EFB52CB08295A1353B6D07@EX2K7-VIRT-2.ads.qub.ac.uk>

See below from one of my colleagues. Does anyone have any comments/advice/ideas? The original was bounced when I included the screenshots. Cross-posted to the DBA Tech list.

Martin

We still have a large amount of memory being used as "Nonpaged Pool" on sp-sql-d3 and sp-sql-d2, but not sp-sql-d1. I have investigated based on the following blog:

http://blogs.msdn.com/b/ntdebugging/archive/2012/08/30/troubleshooting-pool-leaks-part-2-poolmon.aspx

It looks like there may be an issue with the Windows Volume Shadow Copy driver, volsnap.sys. Where to go from here I'm not sure, but this may be an avenue worth exploring. The good news: SharePoint appears to be performing normally.

From mwp.reid at qub.ac.uk Wed May 7 08:33:39 2014
From: mwp.reid at qub.ac.uk (Martin Reid)
Date: Wed, 7 May 2014 14:33:39 +0100
Subject: [dba-SQLServer] SQL Cluster
Message-ID: <631CF83223105545BF43EFB52CB08295A1353B6D03@EX2K7-VIRT-2.ads.qub.ac.uk>

See below from one of my colleagues. Does anyone have any comments/advice/ideas? Cross-posted to the DBA Tech list.

Martin

We still have a large amount of memory being used as "Nonpaged Pool" on sp-sql-d3 and sp-sql-d2, but not sp-sql-d1. I have investigated based on the following blog:

http://blogs.msdn.com/b/ntdebugging/archive/2012/08/30/troubleshooting-pool-leaks-part-2-poolmon.aspx

The two screenshots show my findings. It looks like there may be an issue with the Windows Volume Shadow Copy driver, volsnap.sys. Where to go from here I'm not sure, but this may be an avenue worth exploring. The good news: SharePoint appears to be performing normally.

From accessd at shaw.ca Wed May 7 11:16:30 2014
From: accessd at shaw.ca (Jim Lawrence)
Date: Wed, 7 May 2014 10:16:30 -0600 (MDT)
Subject: [dba-SQLServer] FW: SQL Cluster
In-Reply-To: <631CF83223105545BF43EFB52CB08295A1353B6D07@EX2K7-VIRT-2.ads.qub.ac.uk>
Message-ID: <983672002.41440768.1399479390598.JavaMail.root@cds018>

I don't know whether this is part of the issue, but I have always been advised to turn off Volume Shadow Copy on active data drives, as it can be very resource intensive and runs independently of the data server... even when the server is under load. Oracle manages its own rollback, so any processes that might affect active data management and resources are not allowed to interfere.

Perfmon is a great system monitor: it runs at a low level, takes very few resources and is always running. Many of the DBA guys I have known swear by the tool. I could have used poolmon a few years ago. (It's a good article.)

Jim
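The nonpaged pool lives in kernel memory, outside anything SQL Server allocates for itself, so one quick cross-check from the SQL side is to compare the kernel pool counters with SQL Server's own process memory. A minimal sketch, assuming SQL Server 2008 or later (both DMVs appeared in that release):

-- Machine-wide figures, including the kernel nonpaged pool.
SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       kernel_nonpaged_pool_kb,
       kernel_paged_pool_kb
FROM sys.dm_os_sys_memory;

-- What the SQL Server process itself has allocated.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

If kernel_nonpaged_pool_kb is large while the process figures look normal, the leak is in a driver, as the poolmon trace suggests for volsnap.sys, rather than in SQL Server.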
From accessd at shaw.ca Wed May 7 11:19:22 2014
From: accessd at shaw.ca (Jim Lawrence)
Date: Wed, 7 May 2014 10:19:22 -0600 (MDT)
Subject: [dba-SQLServer] SQL Cluster
In-Reply-To: <631CF83223105545BF43EFB52CB08295A1353B6D03@EX2K7-VIRT-2.ads.qub.ac.uk>
Message-ID: <2135093226.41444249.1399479562983.JavaMail.root@cds018>

Hi Martin:

The screen shots are not linked. Are they important to the message?

Jim
From mwp.reid at qub.ac.uk Wed May 7 14:14:42 2014
From: mwp.reid at qub.ac.uk (Martin Reid)
Date: Wed, 7 May 2014 20:14:42 +0100
Subject: [dba-SQLServer] SQL Cluster
Message-ID: <631CF83223105545BF43EFB52CB08295A1355037D4@EX2K7-VIRT-2.ads.qub.ac.uk>

Jim

Just shots of monitoring.

Martin

Sent from Windows Mail

From davidmcafee at gmail.com Thu May 8 11:10:32 2014
From: davidmcafee at gmail.com (David McAfee)
Date: Thu, 8 May 2014 09:10:32 -0700
Subject: [dba-SQLServer] Any way to help speed things up?
Message-ID:

Hi all, I've been busy working at my new job with SQL-related tasks.

My current project is to import a huge (120-column) table from 710 Access databases spread around 14 network locations into SQL Server.

I've created an SSIS package that loops through a recordset of connection strings and database paths, importing new and changed records into a master load table.

The SSIS package normalizes the data a bit and places it in its respective locations, then truncates the load table and continues the loop.

I've altered my database and set the recovery model to Simple.

I run the package in batches of 10-20 MDB imports just to check things out.

I shrink the log file after every batch (which grows pretty big).

Are there any tips that could save me some time and possibly space?

Thanks in advance,
David

From ab-mi at post3.tele.dk Thu May 8 19:59:29 2014
From: ab-mi at post3.tele.dk (Asger Blond)
Date: Fri, 9 May 2014 02:59:29 +0200
Subject: [dba-SQLServer] Any way to help speed things up?
References:
Message-ID:

Hi David,

I would postpone the shrinking of the log until the end of all of your batches. Since your database has its recovery model set to SIMPLE, don't worry about bloating the log file: the log will truncate after each batch (truncate meaning freeing space inside the physical file, but not reducing the size of the physical file). In your scenario each batch forces the log file into a physical reduction, and then the next batch demands a physical expansion. This causes big I/Os and big file fragmentation: just don't.

/ Asger
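To see Asger's point from the server side, two quick queries show how full the log actually is and what, if anything, is blocking truncation. A minimal sketch; the database name ImportDB is a placeholder:

-- Percentage of each database's log file currently in use.
DBCC SQLPERF(LOGSPACE);

-- If the log is not truncating, this column names the reason
-- (e.g. ACTIVE_TRANSACTION, REPLICATION; NOTHING means all is well).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'ImportDB';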
From fhtapia at gmail.com Thu May 8 21:31:18 2014
From: fhtapia at gmail.com (Francisco Tapia)
Date: Thu, 8 May 2014 19:31:18 -0700
Subject: [dba-SQLServer] Any way to help speed things up?
In-Reply-To:
References:
Message-ID: <81744F14-DB23-4CF1-B455-E921E2180A95@gmail.com>

I agree: you really don't want to be shrinking the log file before you're done with your processing. Moreover, if your server is mission critical I wouldn't go to the effort of moving its recovery model to Simple. I would set it to bulk-logged before the import, then use SSIS to perform bulk inserts, and that should help boost the import performance. Here is an example of how to accomplish the bulk insert:

https://www.simple-talk.com/sql/reporting-services/using-sql-server-integration-services-to-bulk-load-data/

Sent from my iPhone
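For reference, the switch Francisco describes is a single statement on each side of the load. A minimal sketch, with ImportDB as a placeholder database name and the backup path invented; bulk-logged keeps the log backup chain intact while minimally logging bulk operations, whereas SIMPLE breaks the chain:

-- Before the weekly import: minimize logging of bulk operations.
ALTER DATABASE ImportDB SET RECOVERY BULK_LOGGED;

-- ... run the SSIS bulk loads here ...

-- After the import: restore full logging, then take a log backup
-- so point-in-time recovery is possible again.
ALTER DATABASE ImportDB SET RECOVERY FULL;
BACKUP LOG ImportDB TO DISK = N'D:\Backups\ImportDB_log.trn';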
From davidmcafee at gmail.com Mon May 12 12:29:33 2014
From: davidmcafee at gmail.com (David McAfee)
Date: Mon, 12 May 2014 10:29:33 -0700
Subject: [dba-SQLServer] Any way to help speed things up?
In-Reply-To:
References: <81744F14-DB23-4CF1-B455-E921E2180A95@gmail.com>
Message-ID:

I thought that by choosing the "Fast Load" data access mode option I was using the OpenRowSet method of loading data. I tried using the insert (and variations of it) in the links that you provided, but I couldn't get them to work. I'm thinking there are too many issues among all of the dynamic connection strings, databases, variables and missing/included columns.

This isn't (yet) a mission-critical component. It's more of a proof of concept.

A little more of a back story: each week a coworker downloads a couple of MDBs from LA County's FTP site. They run some existing action queries and create a couple of "clean" MDBs that other people use to query. The trouble is, the data is spread among many different databases, so I decided to bring the one "main" (huge) table into a test SQL database.

I let it run all weekend and have brought in all of the records from 3/2006 through the first week of 1/2014. Currently, if I use sp_spaceused 'TableNameHere', I get the following:

rows: 26,627,542
reserved: 51,946,048 KB
data: 24,125,192 KB (24 GB for one F'n table... whoa!)
index_size: 27,812,840 KB
unused: 8,016 KB

This is definitely the biggest table that I've ever worked with! :P

I only bring in new or changed records from the load table. I wrote a view that gives a flattened, identical look and feel to the large table, but only shows the latest status for each "claim". That view takes 9:48 to run and returns 3,029,983 rows.

I'm thinking that I should make a table from this view and have people query against that table rather than the large one. What do you think? Am I going about this the wrong way?

There are no daily inserts into this table, just a batch once a week. Many queries against it, though... It almost sounds like a candidate for NoSQL. :P

I think I may be giving John Colby a run for his money when it comes to table sizes. :)

Thanks again,
David
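On the question of materializing the view: since the data changes only once a week, rebuilding a plain table from the view right after the weekly load and pointing readers at it is a reasonable pattern. A minimal sketch with invented names (LatestClaimView for the view, ClaimLatest for the reporting table, ClaimID for the key); adjust to the real schema:

-- Rebuild the reporting table after each weekly load; readers then
-- hit an indexed 3M-row table instead of the 26M-row base table.
IF OBJECT_ID('dbo.ClaimLatest', 'U') IS NOT NULL
    DROP TABLE dbo.ClaimLatest;

SELECT *
INTO dbo.ClaimLatest
FROM dbo.LatestClaimView;

CREATE UNIQUE CLUSTERED INDEX IX_ClaimLatest_ClaimID
    ON dbo.ClaimLatest (ClaimID);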
On Fri, May 9, 2014 at 1:26 PM, Francisco Tapia wrote:

> David,
> Based on your data load requirement a CSV may become overwhelming... I
> think instead I would perform an OPENROWSET in step 2.c.iii.
>
> OPENROWSET example: http://technet.microsoft.com/en-us/library/ms190312.aspx
> Bulk insert example: http://technet.microsoft.com/en-us/library/ms174335.aspx
>
> So in step a.i, switch the recovery model to bulk-logged. This will keep
> your database in an almost-full recovery model, skipping only the
> bulk-logged operations. Next, following the bulk insert example for Jet
> databases (from the bulk insert link above):
>
> INSERT INTO FastLoadTable (Column1, Column2)
> SELECT Column1, Column2
> FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
>     'C:\Program Files\Microsoft Office\OFFICE11\SAMPLES\Northwind.mdb';
>     'admin';'', Customers);
>
> Performing loads in this manner will minimize the growth of the log file
> when bulk inserts are run, so you wouldn't be logging the hundreds of
> records from each of the 700 Access DBs.
>
> Then at step 3.i, switch the recovery model back to full.
>
> BTW, if your business case does not call for full or bulk-logged recovery,
> you can just switch to Simple and leave it there. Remember, you want full
> recovery if you have a job actively backing up the log file regularly, so
> that you can recover up to the minute before a failure in case of a
> catastrophe.

On Fri, May 9, 2014 at 10:28 AM, David McAfee wrote:

>> Currently I am doing the following:
>>
>> 1. Execute SQL Task - get list of database connection strings and initial
>>    variables (I'm currently limiting to 10-20 database pulls at a time)
>> 2. ForEach Loop (assign loop variables):
>>    a. Truncate load table
>>    b. Script Task (depending on variables, I create different connection
>>       strings and SELECT statements to deal with missing columns)
>>    c. Data Flow (move data from Access to SQL):
>>       i. OLE DB Source - dynamic connection, SQL command from variable
>>       ii. Derived Column - create two derived columns: MDBID (PKID to
>>           path and MDB or source DB) and SourceDBDate (date the database
>>           was considered clean/correct)
>>       iii. OLE DB Destination - SQL fast load into the load table
>>    d. Execute SQL Task - exec sproc that flags new records in the load table
>>    e.-k. Execute SQL Tasks - exec sprocs that normalize the data and
>>       update FKIDs in the load table
>>    l. Execute SQL Task - exec sproc that flags load table records that are
>>       changed from the max equivalent record in the real table
>>    m. Execute SQL Task - exec sproc that appends new and changed records
>>       into the real table
>> 3. Truncate the log file
>>
>> Are you saying that I should replace step 2 (i, ii, iii) with something
>> that selects the data from the 700+ various Access databases and saves it
>> into a text/CSV file, then bulk inserts the text file into the OLE
>> destination load table and then manipulates the data?

From fuller.artful at gmail.com Fri May 16 03:39:16 2014
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 16 May 2014 04:39:16 -0400
Subject: [dba-SQLServer] Stored Procedure Generator for SQL Server
Message-ID:

I came across this article/utility on CodeProject. It generates SPs for Insert and Update for any and all tables in a given database. It has some options, such as the SP prefix. Needless to say, it's free. Here's the URL:

http://www.codeproject.com/Tips/771755/Stored-Procedure-Generator-For-SQL-Server

--
Arthur
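For a sense of what such a generator typically emits, below is a hypothetical example of a generated insert procedure. The table, columns and "usp_" prefix are invented here, and the CodeProject utility's actual output may differ:

-- Hypothetical generated insert SP for a Customers(CustomerID identity,
-- CustomerName, City) table, with "usp_" as the configured prefix.
CREATE PROCEDURE dbo.usp_Customers_Insert
    @CustomerName nvarchar(100),
    @City nvarchar(100),
    @CustomerID int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Customers (CustomerName, City)
    VALUES (@CustomerName, @City);

    -- Hand back the identity value generated for the new row.
    SET @CustomerID = SCOPE_IDENTITY();
END;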