From davidmcafee at gmail.com  Fri May 15 13:22:11 2015
From: davidmcafee at gmail.com (David McAfee)
Date: Fri, 15 May 2015 11:22:11 -0700
Subject: [dba-SQLServer] Looping through queries over and over
Message-ID:

So I have to come up with a report that shows an ongoing loss of insurance. I would really like to do this on the fly.

Each week, we process a batch of records and those are stored with their batch name.

Sample batch names:
Month Begin 20150105
Month Week2 20150112
Month Week3 20150120
Month End 20150126
Month Begin 20150203
Month Week2 20150209
Month Week3 20150216
Month End 20150223

Starting in Week 2, I compare Week 2 to Week 1 (Month Begin), where Week1.InsuranceFlag = 1 and Week2.InsuranceFlag = 0. Easy.

In Week 3, I compare Week 3 to Week 2 for new losses, plus carry forward any losses from the previous weeks that are still 0.

I wrote a stored procedure where I pass it the batch names:

EXEC stpLossOfMediCalDetail 'Month Begin 20150105','Month Week2 20150112'

I can insert those results into a table variable, but I'm thinking I need two temp tables: one to hold the cumulative (new loss) records and another to temporarily hold the results of comparing week 2 to week 1, then left join those back to the cumulative table and delete any records that don't exist in both tables. Then do that over and over.

I don't know how my little text grid will display after being emailed:

Month1          Month2
W1 W2 W3 W4    W5 W6 W7 W8
 1  0  0  0     0  0  0  0
    1  0  0     0  0  0  0
       1  0     0  0  0  0
          1     0  0  0  0
                1  0  0  0
                   1  0  0
                      1  0

Am I overcomplicating this? :)

I hope I don't scare any non-SQL folk off; I could do this with VBA and queries instead of cursors and T-SQL. I'm just trying to think of a good way to do this.
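The single-pair comparison described above, as a minimal T-SQL sketch. The thread never shows the real schema, so the table name is assumed (a later post mentions tblCovivitas) and the member key (EligNumber, which appears later in the thread) and InsuranceFlag column are assumptions:

-- Members who had insurance in the first batch and lost it in the second.
-- Table and column names are hypothetical.
SELECT w1.EligNumber
FROM   tblCovivitas w1
JOIN   tblCovivitas w2
       ON w2.EligNumber = w1.EligNumber
WHERE  w1.ProvBatchName = 'Month Begin 20150105'
  AND  w2.ProvBatchName = 'Month Week2 20150112'
  AND  w1.InsuranceFlag = 1
  AND  w2.InsuranceFlag = 0;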
From davidmcafee at gmail.com  Fri May 15 15:01:09 2015
From: davidmcafee at gmail.com (David McAfee)
Date: Fri, 15 May 2015 13:01:09 -0700
Subject: [dba-SQLServer] Looping through queries over and over
In-Reply-To:
References:
Message-ID:

Ok, I was able to do this in SSMS with hardcoded values, but I'd like to make it dynamic.

Maybe a better question would be:

Given a select query or temp table of these batch names:
Month Begin 20141202
Month Begin 20150106
Month Begin 20150203
Month Begin 20150302
Month Begin 20150403
Month Begin 20150501

how can I dynamically come up with a loop, or fill a temp table, to look like this?

Step  tableABatch1          tableABatch2          tableTBatch1          tableTBatch2
1     Month Begin 20141202  Month Begin 20150106  Month Begin 20141202  Month Begin 20150203
2     NULL                  NULL                  Month Begin 20141202  Month Begin 20150302
3     NULL                  NULL                  Month Begin 20141202  Month Begin 20150403
4     NULL                  NULL                  Month Begin 20141202  Month Begin 20150501
5     Month Begin 20150106  Month Begin 20150203  Month Begin 20150106  Month Begin 20150302
6     NULL                  NULL                  Month Begin 20150106  Month Begin 20150403
7     NULL                  NULL                  Month Begin 20150106  Month Begin 20150501
8     Month Begin 20150203  Month Begin 20150302  Month Begin 20150203  Month Begin 20150403
9     NULL                  NULL                  Month Begin 20150203  Month Begin 20150501
10    Month Begin 20150302  Month Begin 20150403  Month Begin 20150302  Month Begin 20150501
11    Month Begin 20150403  Month Begin 20150501  NULL                  NULL

The reason that I ask is because this is what I ended up doing (manually) for steps 1-4 above. The big DELETE statements below are identical.

DECLARE @TempTableA TABLE(fields....) --Cumulative table
DECLARE @TempTableT TABLE(fields....) --temporary temp table :)

INSERT INTO @TempTableA EXEC stpLossOfMediCalDetail 'Month Begin 20141202', 'Month Begin 20150106' --Step 1 above
INSERT INTO @TempTableT EXEC stpLossOfMediCalDetail 'Month Begin 20141202', 'Month Begin 20150203' --Step 1 above

DELETE @TempTableA
FROM @TempTableA B
LEFT JOIN
    (SELECT A.*
     FROM @TempTableA A
     INNER JOIN @TempTableT T
        ON A.EligNumber = T.EligNumber
       AND A.AccountNumb = T.AccountNumb
       AND A.FirstDate = T.FirstDate) C
   ON B.EligNumber = C.EligNumber
  AND B.AccountNumb = C.AccountNumb
  AND B.FirstDate = C.FirstDate
WHERE C.AccountNumb IS NULL
  AND C.FirstDate IS NULL
  AND C.EligNumber IS NULL

DELETE FROM @TempTableT

INSERT INTO @TempTableT EXEC stpLossOfMediCalDetail 'Month Begin 20141202', 'Month Begin 20150302' --Step 2

--(same DELETE block as above)

DELETE FROM @TempTableT

INSERT INTO @TempTableT EXEC stpLossOfMediCalDetail 'Month Begin 20141202', 'Month Begin 20150403' --Step 3

--(same DELETE block as above)

DELETE FROM @TempTableT

INSERT INTO @TempTableT EXEC stpLossOfMediCalDetail 'Month Begin 20141202', 'Month Begin 20150501' --Step 4

--(same DELETE block as above)

SET NOCOUNT OFF

SELECT * FROM @TempTableA --Final Select
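David's follow-up below builds this pair matrix with a WHILE loop; the same matrix can also be generated in one set-based statement. A sketch using ROW_NUMBER(), assuming the batch names come from tblCovivitas as in his later post:

;WITH b AS (
    SELECT ProvBatchName,
           ROW_NUMBER() OVER (ORDER BY ProvBatchName) AS rn
    FROM   (SELECT DISTINCT ProvBatchName
            FROM   tblCovivitas
            WHERE  ProvBatchName LIKE 'Month Begin%') d
)
SELECT ROW_NUMBER() OVER (ORDER BY b1.ProvBatchName, b2.ProvBatchName) AS Step,
       -- 'A' when B2 is the batch immediately after B1, otherwise 'T'
       CASE WHEN b2.rn = b1.rn + 1 THEN 'A' ELSE 'T' END AS T,
       b1.ProvBatchName AS B1,
       b2.ProvBatchName AS B2
FROM   b b1
JOIN   b b2 ON b2.rn > b1.rn
ORDER  BY b1.ProvBatchName, b2.ProvBatchName;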
From fhtapia at gmail.com  Fri May 15 17:48:56 2015
From: fhtapia at gmail.com (fhtapia at gmail.com)
Date: Fri, 15 May 2015 22:48:56 +0000
Subject: [dba-SQLServer] Looping through queries over and over
In-Reply-To:
References:
Message-ID:

Hey Dave, it sounds like what you are fishing for is a SQL Server Analysis Services cube. Since you are loading data each week by batch, that data could be processed against your analysis cube so you can run the type of reports you wish to build. Not only that, you can easily roll up from day > week > quarter and finally annually. This then provides the business with a quick snapshot of what the patterns are.

If you haven't gotten into building cubes, you can start with this walkthrough:
http://www.codeproject.com/Articles/658912/Create-First-OLAP-Cube-in-SQL-Server-Analysis-Serv
or
http://www.sqlshack.com/sql-server-business-intelligence-features-olap-cube-creating/

regards,
Francisco

On Fri, May 15, 2015 at 1:02 PM David McAfee wrote:
> Ok, I was able to do this in SSMS with hardcoded values, but I'd like to
> make it dynamic.
> [...]
From davidmcafee at gmail.com  Fri May 15 19:28:54 2015
From: davidmcafee at gmail.com (David McAfee)
Date: Fri, 15 May 2015 17:28:54 -0700
Subject: [dba-SQLServer] Looping through queries over and over
In-Reply-To:
References:
Message-ID:

I ended up doing it without cursors, using table variables, and it does the whole year in less than 1 second.

It could still use some cleanup, but it's Friday and I am beat!

CREATE PROCEDURE stpLossOfMediCal AS

DECLARE @FirstBatch AS NVARCHAR(50), @SecondBatch AS NVARCHAR(50)

SET @FirstBatch = 'Month Begin 20141130' --Will change this to pull from a table

DECLARE @NumberRecords int, @RowCount int, @Batch AS NVARCHAR(50),
        @PreviousBatchName AS NVARCHAR(50), @T CHAR(1)

DECLARE @tblUniqueBatchNames TABLE(RowID int IDENTITY(1, 1), BatchName NVARCHAR(50))
DECLARE @tblBatchMatrix TABLE(RowID int IDENTITY(1, 1), T CHAR(1), B1 NVARCHAR(50), B2 NVARCHAR(50))

--Insert distinct batch names into temp table A
INSERT INTO @tblUniqueBatchNames
SELECT ProvBatchName FROM tblCovivitas WITH (NOLOCK)
WHERE ProvBatchName LIKE 'Month Begin%'
  AND ProvBatchName > @FirstBatch
GROUP BY ProvBatchName
ORDER BY ProvBatchName

SET @NumberRecords = @@ROWCOUNT
SET @RowCount = 1

--Create the batch matrix for temp table B:
WHILE @RowCount <= @NumberRecords
BEGIN
    SELECT @Batch = BatchName FROM @tblUniqueBatchNames WHERE RowID = @RowCount

    INSERT INTO @tblBatchMatrix
    SELECT @T, @Batch, BatchName FROM @tblUniqueBatchNames
    WHERE BatchName > @Batch
    ORDER BY BatchName

    SET @RowCount = @RowCount + 1
END

--Flag the first new batch record for B1 with an A, all other records with T
UPDATE @tblBatchMatrix SET T = 'A' WHERE RowID IN (SELECT MIN(RowID) FROM @tblBatchMatrix GROUP BY B1)
UPDATE @tblBatchMatrix SET T = 'T' WHERE T IS NULL

--Test select
--SELECT RowID, T, B1, B2 FROM @tblBatchMatrix ORDER BY B1, B2

--Now that we have the matrix made up, let's do the work:
DECLARE @TempTableA TABLE(a bunch of fields here)
DECLARE @TempTableT TABLE(a bunch of fields here)

--The matrix has more rows than there are batch names, so re-count before the second loop
SELECT @NumberRecords = COUNT(*) FROM @tblBatchMatrix

--Reset the RowCount variable to 1
SET @RowCount = 1

WHILE @RowCount <= @NumberRecords
BEGIN
    SELECT @T = T, @FirstBatch = B1, @SecondBatch = B2
    FROM @tblBatchMatrix WHERE RowID = @RowCount

    IF @T = 'A'
    BEGIN
        INSERT INTO @TempTableA EXEC stpLossOfMediCalDetail @FirstBatch, @SecondBatch
    END
    ELSE
    BEGIN
        INSERT INTO @TempTableT EXEC stpLossOfMediCalDetail @FirstBatch, @SecondBatch

        DELETE @TempTableA
        FROM @TempTableA B
        LEFT JOIN
            (SELECT A.*
             FROM @TempTableA A
             INNER JOIN @TempTableT T
                ON A.EligNumber = T.EligNumber
               AND A.AccountNumb = T.AccountNumb
               AND A.FirstDate = T.FirstDate) C
           ON B.EligNumber = C.EligNumber
          AND B.AccountNumb = C.AccountNumb
          AND B.FirstDate = C.FirstDate
        WHERE C.AccountNumb IS NULL
          AND C.FirstDate IS NULL
          AND C.EligNumber IS NULL

        DELETE FROM @TempTableT
    END

    SET @RowCount = @RowCount + 1
END

SELECT * FROM @TempTableA --Not really *, I just put that to keep it smaller for the list
ORDER BY EligNumber, LOCATION_NAME, RecipName, FirstDate, SecondDate

I will look into the cube building for other projects that I am working on over here.

Thanks,
D
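The repeated intersect-and-delete can also be collapsed into one set-based pass: stage every comparison result once, tagged with the batch it came from, then keep only the members that appear in every comparison. A rough sketch; the staging column types are assumptions, since the thread never shows stpLossOfMediCalDetail's full output:

-- Stage one stpLossOfMediCalDetail result set per B2 into @Staged,
-- adding the B2 batch name to each row, then:
DECLARE @Staged TABLE(B2 NVARCHAR(50), EligNumber INT, AccountNumb INT, FirstDate DATETIME)

-- Members lost in the first comparison AND still lost in every later one
-- are exactly those present for every distinct B2.
SELECT EligNumber, AccountNumb, FirstDate
FROM   @Staged
GROUP  BY EligNumber, AccountNumb, FirstDate
HAVING COUNT(DISTINCT B2) = (SELECT COUNT(DISTINCT B2) FROM @Staged);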
From fuller.artful at gmail.com  Fri May 15 19:37:52 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 May 2015 20:37:52 -0400
Subject: [dba-SQLServer] Looping through queries over and over
In-Reply-To:
References:
Message-ID:

Quite right, Francisco. I keep trying to figure out why people confuse OLTP with OLAP databases, and ask the former to do what is obviously the purpose of the latter. Add to this the emergent need to process massive amounts of input, which OLTP cannot handle, and the creation of such products as VoltDB, which can easily handle 50K TPS without even a hiccup, and now (not that everyone needs that speed) the game has become three layers thick.

IMO, the NoSQL players are done unless they totally change and revert to SQL as their syntax. I think that VoltDB has proved this. I confess that I am totally awestruck by VoltDB. Not everybody needs this instantaneous performance, but those who do would be well advised to check it out. That said, VoltDB is all about handling massive input in an ACID situation: 50K users buying stuff at once, or thousands of sensors pumping in many measurements per second, or an organization such as TicketMaster which has a very fixed number of tickets available for any given event; that sort of thing. But I'm also seeing that in-memory databases can play into SMBs that do not require the horsepower that Amazon does. According to my initial tests on a database of about 7GB, VoltDB is 10 times faster than MS-SQL or MariaDB or Oracle.

A.

Francisco

From fuller.artful at gmail.com  Mon May 18 05:35:52 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Mon, 18 May 2015 06:35:52 -0400
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
Message-ID:

Anyone on this list use R? Microsoft recently purchased Revolution Analytics and plans to implement the R language in a sandbox within SQL Server, thus eliminating the need to extract the data before doing deep analytics on it. A little more explanation can be found here.

-- Arthur

From gustav at cactus.dk  Mon May 18 05:58:08 2015
From: gustav at cactus.dk (Gustav Brock)
Date: Mon, 18 May 2015 10:58:08 +0000
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
Message-ID:

Hi Arthur

Not me, but I've read about R before and located this tutorial:
http://www.r-tutor.com/
Here they start with 1 + 2 (really):
http://www.r-tutor.com/r-introduction

/gustav

-----Original Message-----
From: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com] On behalf of Arthur Fuller
Sent: 18 May 2015 12:36
To: Discussion concerning MS SQL Server
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016

Anyone on this list use R? [...]

From accessd at shaw.ca  Tue May 19 12:33:15 2015
From: accessd at shaw.ca (Jim Lawrence)
Date: Tue, 19 May 2015 11:33:15 -0600 (MDT)
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
In-Reply-To:
Message-ID: <404313227.77415316.1432056795951.JavaMail.root@shaw.ca>

Hi Arthur:

Using the R language in SQL would seem more like an add-on. The language is very different from the SQL language: to my understanding, it produces graphics, with animation, from huge data sets.

Jim
----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Monday, May 18, 2015 3:35:52 AM
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016

Anyone on this list use R? [...]

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fuller.artful at gmail.com  Tue May 19 13:45:01 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Tue, 19 May 2015 14:45:01 -0400
Subject: [dba-SQLServer] SQL 2016 and JSON
Message-ID:

SQL Server 2016 will include full support for JSON. Click here for details.

-- Arthur

From fuller.artful at gmail.com  Tue May 19 18:57:17 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Tue, 19 May 2015 19:57:17 -0400
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
In-Reply-To: <404313227.77415316.1432056795951.JavaMail.root@shaw.ca>
References: <404313227.77415316.1432056795951.JavaMail.root@shaw.ca>
Message-ID:

Jim, I agree. But the current situation requires that you first export the data set from SQL Server or whatever DB is your choice, and then use R to analyze the exported data. MS claims that implementing R in a sandbox inside SQL would eliminate that middle step. So I've read, anyway.

On Tue, May 19, 2015 at 1:33 PM, Jim Lawrence wrote:
> Hi Arthur:
>
> Using the R language in SQL would seem more like an add-on. The language
> is very different from the SQL language: it produces graphics with
> animation from huge data sets, to my understanding.
>
> Jim
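For reference, the in-database R integration being discussed is exposed through the sp_execute_external_script procedure that eventually shipped in SQL Server 2016. A minimal sketch; the table dbo.Sales and its Amount column are made up for illustration:

-- Run R inside the server: the query result arrives in R as InputDataSet,
-- and whatever R assigns to OutputDataSet comes back as a result set.
-- (Requires R Services installed and 'external scripts enabled' set to 1.)
EXEC sp_execute_external_script
     @language = N'R',
     @script = N'OutputDataSet <- data.frame(avg_amount = mean(InputDataSet$Amount))',
     @input_data_1 = N'SELECT Amount FROM dbo.Sales'
WITH RESULT SETS ((avg_amount FLOAT));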
From pcs.accessd at gmail.com  Wed May 20 00:33:35 2015
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Wed, 20 May 2015 13:33:35 +0800
Subject: [dba-SQLServer] A couple of quick questions
Message-ID:

Hi all,
We are finally in the process of migrating an Access application backend, in production since 2000, to a SQL Server backend. At the same time we are upgrading the frontend from Access 2003 to Access 2010, and the client is transferring from a 2003 Terminal Server to a 2012 R2 Terminal Server (I think they call it something other than Terminal Server now).

We upsized to a SQL Server 2005 Standard Edition Db using the Access 2003 Upsizing Wizard, looking out for the various gotchas.

The SQL backend is now in production on the existing 2003 server. It is about to be transferred to SQL 2014 Express on a new 2012 R2 server.

A couple of questions:

*1*.
Each table has lots of extended table properties - probably carried over from Access.
Is there any need to keep these extended properties?

*2*.
I backed up a version of the 2005 SQL production Db and restored it on SQL 2014 Express as a development / staging Db.

In the Db general properties the Db is reported as being 1390.31Mb in size with 80.61Mb available.
The .mdf file is reported as being 780Mb in initial size; the .ldf file as being 612Mb in initial size.

I changed the autogrowth from the default 10Mb for the .mdf and 10% for the .ldf to 1,024Mb for each, with a maximum of 10,240Mb (10Gb - the max size for a SQL Express Db).

Is an incremental growth of 1Gb the best setting? Or what do you people suggest?

*3*.
I did another full backup from the SQL 2005 Db using SSMS. The .bak file is 727Mb.

What happens to the .ldf log file during a full backup?

At a high conceptual level I understand the function of the log file: it helps restore a Db to the point of the last committed transaction before a crash, by using a combination of the last full backup and the log file.

When we do a full backup, is the log file "reset" somehow, or does it still keep a lot of history information?
A log file can grow very big.
Some say to never shrink the log file...
What is the ABC of dealing with / handling the log file?

*4*.
Anyone got a link to a good step-by-step walkthrough of how to do a restore using a full backup + existing log file to a specific point, i.e. just before a Db crash?

Thanks,
/borge

From jlawrenc1 at shaw.ca  Wed May 20 10:11:14 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 09:11:14 -0600 (MDT)
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
In-Reply-To:
Message-ID: <180088916.41377643.1432134674712.JavaMail.root@shaw.ca>

Hi Arthur:

That would be a savings... Has Microsoft released a beta or an alpha version yet?

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Tuesday, May 19, 2015 4:57:17 PM
Subject: Re: [dba-SQLServer] In-database R coming to SQL Server 2016

Jim, I agree. But the current situation requires that you first export the data set from SQL Server or whatever DB is your choice, and then use R to analyze the exported data. [...]

From jlawrenc1 at shaw.ca  Wed May 20 10:14:01 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 09:14:01 -0600 (MDT)
Subject: [dba-SQLServer] SQL 2016 and JSON
In-Reply-To:
Message-ID: <1235744151.41380496.1432134841724.JavaMail.root@shaw.ca>

Hi Arthur:

That is interesting. It would seem that Microsoft is trying to make MS SQL the Swiss Army knife of data management. I wonder how this is going to interface with Azure?

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Tuesday, May 19, 2015 11:45:01 AM
Subject: [dba-SQLServer] SQL 2016 and JSON

SQL Server 2016 will include full support for JSON. Click here for details. [...]

From gustav at cactus.dk  Wed May 20 10:25:15 2015
From: gustav at cactus.dk (Gustav Brock)
Date: Wed, 20 May 2015 15:25:15 +0000
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
Message-ID:

Hi Jim

A preview is around the corner:
http://www.microsoft.com/en-ca/server-cloud/products/sql-server-2016/

/gustav
-----Original Message-----
From: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com] On behalf of Jim Lawrence
Sent: 20 May 2015 17:11
To: Discussion concerning MS SQL Server
Subject: Re: [dba-SQLServer] In-database R coming to SQL Server 2016

Hi Arthur:

That would be a savings... Has Microsoft released a beta or an alpha version yet? [...]

From jlawrenc1 at shaw.ca  Wed May 20 11:11:42 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 10:11:42 -0600 (MDT)
Subject: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server
In-Reply-To:
Message-ID: <354133841.41435282.1432138302419.JavaMail.root@shaw.ca>

Hi Arthur:

A great article. It seems that Stonebraker has accurately described the problems...

...but, from my observations, he has not given a solution. He does mention databases that run in RAM and are laid out like arrays. I don't see the difference between arrays and tables, as they both use rows and columns, but maybe he defines arrays as rows and columns in memory. The problem I see is that there is not yet any RAM that matches the storage of hard drives... not even close... even attempting such a system would cost an incredible price. Then he mentioned the performance of GPU processors. They are very fast for sure, but the energy requirements are prohibitive. (A good friend has a Bitcoin mining operation and he says his block of GPUs uses more power than the rest of his house... and he uses electric heat.)

So, IMHO, I would guess we will have to wait for technology to catch up before Michael Stonebraker's predictions can be fully implemented.

In the meantime, how has your RAM array based VoltDB been running?

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Friday, April 10, 2015 2:46:37 PM
Subject: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server

Database legend Michael Stonebraker explains why these databases are obsolete, and also the enormous challenge facing Facebook.

-- Arthur

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From jlawrenc1 at shaw.ca  Wed May 20 11:31:33 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 10:31:33 -0600 (MDT)
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
In-Reply-To:
Message-ID: <416825066.41454037.1432139493450.JavaMail.root@shaw.ca>

Hi Gustav:

Thank you for that link.

Jim
----- Original Message -----
From: "Gustav Brock"
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 8:25:15 AM
Subject: Re: [dba-SQLServer] In-database R coming to SQL Server 2016

Hi Jim

A preview is around the corner:
http://www.microsoft.com/en-ca/server-cloud/products/sql-server-2016/
[...]

From fuller.artful at gmail.com  Wed May 20 12:52:17 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Wed, 20 May 2015 13:52:17 -0400
Subject: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server
In-Reply-To: <354133841.41435282.1432138302419.JavaMail.root@shaw.ca>
References: <354133841.41435282.1432138302419.JavaMail.root@shaw.ca>
Message-ID:

Jim,

Perhaps you missed a couple of key things, which were not described in that brief interview, but I've read all the docs.

First and foremost, you scale the number of servers and their RAM to suit the db of interest, and the total RAM on all the servers is your workspace. IOW, 8 servers each armed with 64GB of RAM = 8 * 64 = 512GB of workspace. Not enough? Double up on the RAM on each server.

Second, partitioning. You partition on a particular column. No need to describe the partitions in more detail; VoltDB takes care of that. Partitioning also applies to indexes, of course, but here's an interesting new (to me) idea: you can partition stored procedures as well. The advantage of partitioned stored procedures is that everything that needs to be there will be there within the partition. Not all tables need to be partitioned, of course - for example, lookup tables; with those you use replication instead, placing a copy on each partition.

So think of a 1/2 TB database sitting in RAM: no disk accesses at all except at startup and shutdown. And when you want to move the data from VoltDB to a disk-based system for OLAP etc., you don't even have to shut it down; just specify how often (in either transactions or temporal units) you want to save to disk.

Even on my modest hardware, some of the sample programs are achieving 50K TPS, which is a far cry from anything SQL Server or Oracle are capable of.

Arthur

On Wed, May 20, 2015 at 12:11 PM, Jim Lawrence wrote:
> Hi Arthur:
>
> A great article. It seems that Stonebraker has accurately described the
> problems... [...]
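For readers curious what the partitioning Arthur describes looks like in practice, it is declared in VoltDB's DDL. A rough sketch based on the VoltDB documentation of that era; the table, column, and procedure names are invented:

-- Rows are distributed across sites by the partitioning column.
CREATE TABLE votes (
    phone_number BIGINT  NOT NULL,
    contestant   INTEGER NOT NULL
);
PARTITION TABLE votes ON COLUMN phone_number;

-- A single-partition procedure: each call runs entirely on the site
-- that owns the given phone_number, which is where the speed comes from.
CREATE PROCEDURE CountMyVotes AS
    SELECT COUNT(*) FROM votes WHERE phone_number = ?;
PARTITION PROCEDURE CountMyVotes ON TABLE votes COLUMN phone_number;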
From fhtapia at gmail.com  Wed May 20 13:15:25 2015
From: fhtapia at gmail.com (fhtapia at gmail.com)
Date: Wed, 20 May 2015 18:15:25 +0000
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To:
References:
Message-ID:

Hi Borge,
Welcome to the group! Hopefully we can all contribute to your questions :)

1) I haven't worked with Access upsized tables for quite some time, so hopefully someone else can chime in here on whether the extended properties are required by the FE GUI.

2) You are correct in thinking that you should not want to auto-grow your database. For performance reasons, growing a database should occur during idle times. (So when is that?) That's for you, the admin, to decide: review the database growth by monitoring it, using any of the SQL reports or 3rd-party reporting; then you can gauge monthly how often you want to grow your database.

Your log file should grow as little as possible. In order to effectively use point-in-time recovery you need to have regular transaction log backups. This ensures you will be able to recover your database to its last known good state. Generally, to recover a database you FIRST back up the transaction log; then step two would be to apply the last full backup, followed by your differential backups, and last your transaction log backups. You of course want to discriminate with the last transaction log backup file, to specify the time right before the moment of failure.

So what size should the log file be? A general rule is a default size of about 10% of the size of your mdf file. That's just a generic rule, but if you make it 2x the size of the largest table in your mdf (data) file, you can bet that you will have very little growth occurring in the transaction log.

3) After a backup, SQL Server takes the committed transactions from the log file and writes them to the data file. If you are in simple mode this happens regularly on a checkpoint. If you have a mission-critical transactional database you will want to back up the log on a regular basis. My setup is generally to back up the log when it reaches 60% of capacity, and I write my backup files to a separate disk system in case of a disk catastrophe. This gives me a regular log backup roughly every 30 min for many of our systems.

So under regular practice, once you've sized your transaction log file, you will only need to grow it once your data file reaches a larger capacity. With SQL Server, the database files are set, and they reuse any internal space when available. So the file size you see on the disk is not the actual size of your data, which is why your .bak files are a different size.
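To Borge's question 2, switching from percentage-based to fixed-increment growth is done per file. A hedged sketch; the database and logical file names are placeholders (query sys.database_files or run sp_helpfile for the real ones):

-- Fixed increments avoid the ever-larger growth events that 10% causes
-- as the log grows.
ALTER DATABASE StagingDb
    MODIFY FILE (NAME = N'StagingDb_Data', FILEGROWTH = 512MB, MAXSIZE = 10240MB);
ALTER DATABASE StagingDb
    MODIFY FILE (NAME = N'StagingDb_Log',  FILEGROWTH = 256MB);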
4) There are a few; I'll dig through some of my links and post back. Do you prefer a T-SQL command or the GUI?

On Tue, May 19, 2015 at 10:34 PM Borge Hansen wrote:
> Hi all,
> We are finally in the process of migrating an Access application backend
> having been in production since 2000 to SQL Db backend.
> [...]

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
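While waiting for those links, the T-SQL shape of the answer to question 4 is standard. A sketch assuming the database is in the full recovery model; the database name, backup paths, and STOPAT time are hypothetical:

-- 1. Back up the tail of the log (leaves the db in RESTORING state).
BACKUP LOG StagingDb TO DISK = N'E:\Backup\StagingDb_tail.trn'
    WITH NO_TRUNCATE, NORECOVERY;

-- 2. Restore the last full backup, but don't recover yet.
RESTORE DATABASE StagingDb FROM DISK = N'E:\Backup\StagingDb_full.bak'
    WITH NORECOVERY, REPLACE;

-- 3. Roll the log forward to just before the crash, then recover.
RESTORE LOG StagingDb FROM DISK = N'E:\Backup\StagingDb_tail.trn'
    WITH STOPAT = '2015-05-20 09:55:00', RECOVERY;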
From jlawrenc1 at shaw.ca  Wed May 20 13:29:29 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 12:29:29 -0600 (MDT)
Subject: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server
In-Reply-To:
Message-ID: <2134296140.41550832.1432146569823.JavaMail.root@shaw.ca>

Hi Arthur:

Thanks for replying back.

So you are saying that memory on a group of computers can be connected together, similar to the way ZFS connects a group of computers' hard drives? That is very interesting technology. When a group of computers is so configured, are they usable for any other purpose, or are they dedicated?

What happens when a database exceeds the memory available? Does it then just page to the hard drive and keep running? What happens to performance?

There are a hundred other questions that I could ask, but I am sure they are answered in the VoltDB documentation.

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 10:52:17 AM
Subject: Re: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server

Jim,

Perhaps you missed a couple of key things, which were not described in that brief interview but I've read all the docs. [...]
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fuller.artful at gmail.com  Wed May 20 14:17:47 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Wed, 20 May 2015 15:17:47 -0400
Subject: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server
In-Reply-To: <2134296140.41550832.1432146569823.JavaMail.root@shaw.ca>
References: <2134296140.41550832.1432146569823.JavaMail.root@shaw.ca>
Message-ID:

Jim,

First things first. VoltDB is designed to handle massive input loads, which may or may not come from humans. Examples might include a Bloomberg stream of stock-market transactions, or maybe election results coming in from thousands of polling stations, or thousands of transactions on a web site such as Chapters Indigo or Amazon. In those cases, the sort of hardware I described earlier would definitely be required. Oh, I forgot to mention that lots of cores would help - the more the better.

As to the db outgrowing the RAM available, it is the developer/DBA's responsibility to ensure that the DB never grows larger than the RAM available. As it starts to get close, add more servers and/or RAM. You can do both operations without having to bring the database down. Those kinds of organizations can afford the costs.

But VoltDB is not only for such massive database input streams. In fact, there's a community version that can run in as little as 4GB of RAM. The largest db I ever managed as DBA was just shy of 50GB, so a pair of servers each with 64GB leaves lots of room for rapid growth. But I'm beginning to see a place for VoltDB in lots of situations. A chain of drugstores, say. How big can the Shoppers Drug Mart database be? Or think of the "average" app most of us have written, with 30 or 40 PCs running an Access app against a SQL back end. The community edition is cheap enough to make that thinkable, and the speed would blow the doors off the SQL equivalent. That would be FUN.

On Wed, May 20, 2015 at 2:29 PM, Jim Lawrence wrote:
> Hi Arthur:
>
> Thanks for replying back.
>
> So you are saying that memory on a group of computers can be connected
> together, similar to the way ZFS connects a group of computers' hard drives?
> [...]
From fuller.artful at gmail.com  Wed May 20 14:27:35 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Wed, 20 May 2015 15:27:35 -0400
Subject: [dba-SQLServer] SQL 2016 and JSON
In-Reply-To: <1235744151.41380496.1432134841724.JavaMail.root@shaw.ca>
References: <1235744151.41380496.1432134841724.JavaMail.root@shaw.ca>
Message-ID:

No idea, yet. But I would imagine that Azure has to be part of the plan.

A.

On Wed, May 20, 2015 at 11:14 AM, Jim Lawrence wrote:
> Hi Arthur:
>
> That is interesting. It would seem that Microsoft is trying to make MS SQL
> the Swiss Army knife of data management. I wonder how this is going to
> interface with Azure?
>
> Jim
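As a footnote for the archive, the JSON support being discussed surfaces in T-SQL as the FOR JSON and OPENJSON constructs that shipped in SQL Server 2016. A minimal sketch of the former; dbo.Customers is a made-up table:

-- Returns the rows as a single JSON array, e.g.
-- [{"CustomerID":1,"Name":"Alice"},{"CustomerID":2,"Name":"Bob"}]
SELECT CustomerID, Name
FROM   dbo.Customers
FOR JSON AUTO;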
From fuller.artful at gmail.com  Wed May 20 14:30:33 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Wed, 20 May 2015 15:30:33 -0400
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
In-Reply-To: <416825066.41454037.1432139493450.JavaMail.root@shaw.ca>
References: <416825066.41454037.1432139493450.JavaMail.root@shaw.ca>
Message-ID:

That's what I love about this group. Each of us mining different parts of the vein, all toward a common cause. Like a worldwide team!

On Wed, May 20, 2015 at 12:31 PM, Jim Lawrence wrote:
> Hi Gustav:
>
> Thank you for that link.
>
> Jim

From jlawrenc1 at shaw.ca  Wed May 20 14:40:06 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 13:40:06 -0600 (MDT)
Subject: [dba-SQLServer] Schemas for separation of clients?
In-Reply-To:
Message-ID: <252121716.41600178.1432150806761.JavaMail.root@shaw.ca>

Hi Mark:

I have just started to work with SSH and have only been using the basics, so I can connect to my servers from anywhere.

One way I have been using SSH is via FileZilla (an SSH client is built in). It is not the traditional package you would think of, as most people use FZ as a secure desktop-to-webserver connection... but it is so much more than that. Below is a list of features I cut and pasted from an article on FZ:

* Supports FTP, FTP over SSL/TLS (FTPS) and SSH File Transfer Protocol (SFTP)
* IPv6 support
* Available in more than 40 languages
* Supports resume and transfer of large files >4GB
* Easy to use Site Manager and transfer queue
* Bookmarks
* Drag & drop support
* Speed limits
* Filename filters
* Directory comparison
* Network configuration wizard
* Remote file editing
* Keep-alive
* HTTP/1.1, SOCKS5 and FTP Proxy support
* Logging to file
* Synchronized directory browsing
* Remote file search
* Tabbed interface to connect to multiple servers

Connecting to a database should be easy, though I have not tried it. Here is a simple example/explanation of how to connect to a MySQL DB via a remote SSH connection that just port-forwards to the DB:

http://www.howtogeek.com/howto/ubuntu/access-your-mysql-server-remotely-over-ssh/

Of course it assumes the remote server is set up to host internet connections. MS SQL should be as easy to configure. Here is another link, to an article on the concept of port forwarding with SSH (which took me a bit of time to fully understand):

http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html

In a nutshell, once an appropriate SSH string is entered, any database can be run on a host server as if it were running locally on the client. You can also multiplex from your client computer to any number of remote databases simultaneously... limited, of course, by your bandwidth. The security and policies are limited only by your imagination and requirements (via the ssh/config file). Word from the wise: though it is tempting, do not go full-bore encryption... once the key is lost, it is gone forever.
PS: Another good article on setting up port forwarding, and a few little workarounds for when issues arise:

http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html

HTH

As I become more familiar with the product I will post my insights here, and please do the same.

Jim

----- Original Message -----
From: "Mark Breen"
To: "Discussion concerning MS SQL Server"
Sent: Tuesday, March 31, 2015 6:23:39 AM
Subject: Re: [dba-SQLServer] Schemas for separation of clients?

Hello Jim

Do you have any specific links to learn how to use OpenSSH on Windows to establish a VPN?

I can Google myself, but just asking in case you have 'the perfect document'.

[...]
It seems quite > straight-forward to at the AWS site but may create some challenges at my > site. > >> > >> /gustav > >> > >> -----Oprindelig meddelelse----- > >> Fra: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com] > >> P? vegne af Mark Breen > >> Sendt: 30. marts 2015 11:31 > >> Til: Discussion concerning MS SQL Server > >> Emne: Re: [dba-SQLServer] Schemas for separation of clients? > >> > >> Hello Gustav > >> > >> I have played with RDS but mostly I use ec2. For you I was suggesting > ec2 and SQL express. Then you can have multiple dB's. > >> > >> Mark > >> On 30 Mar 2015 16:28, "Gustav Brock" wrote: > >> > >>> Hi Mark > >>> > >>> Interesting. We've used AWS for years, but for storage only, so I was > >>> not up-to-date with their RDS offerings. It seems like it could be > >>> well suited for my purpose. > >>> > >>> I had to update our login options, and that seems for some reason to > >>> be a major task as they claim it can take up to 24 hours before > >>> settled - and until then no RDS service. I have to turn on some > patience ... > >>> > >>> /gustav > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: dba-SQLServer > >>> [mailto:dba-sqlserver-bounces at databaseadvisors.com] > >>> P? vegne af Mark Breen > >>> Sendt: 30. marts 2015 05:46 > >>> Til: Discussion concerning MS SQL Server > >>> Emne: Re: [dba-SQLServer] Schemas for separation of clients? > >>> > >>> Hello Gustav > >>> > >>> I have been using aws for two years now and love it. Amazon have > >>> some new micro machines (t2) that are really cheap and yet have some > power. > >>> Their costs are low enough yo consider for low budget projects yet > >>> offer all the quality of true best in class cloud. > >>> > >>> I usually do all my setting up on an enormous machine, then shutdown > >>> and switch to a micro instance and restart. That way I get to deal > >>> with great performance when in rdp and once I am done I pay pennies > per day. > >>> > >>> For me the absolute cost is secondary to the almost 100% likelihood > >>> my hardware will never fail. This reliability is what u am really > buying. > >>> > >>> As an aside, I have automated all my daily backups and transferred > >>> them all off machine to Amazon s3. > >>> > >>> Hth > >>> Mark > >>> On 30 Mar 2015 02:42, "Gustav Brock" wrote: > >>> > >>>> Hi Mark > >>>> > >>>> Good points. The added precautions and potential issues may very > >>>> well not be more "expensive" than the little money saved. 
> >>>> /gustav

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From jlawrenc1 at shaw.ca  Wed May 20 16:06:58 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Wed, 20 May 2015 15:06:58 -0600 (MDT)
Subject: [dba-SQLServer] SQL 2016 and JSON
In-Reply-To:
Message-ID: <644426785.41660280.1432156018427.JavaMail.root@shaw.ca>

Hi Arthur:

PS It is nice to see that XML is on its way out...well, maybe not on its
way out, but definitely being handed its cap. ;-)

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 12:27:35 PM
Subject: Re: [dba-SQLServer] SQL 2016 and JSON

No idea, yet. But I would imagine that Azure has to be part of the plan.

A.

On Wed, May 20, 2015 at 11:14 AM, Jim Lawrence wrote:

> Hi Arthur:
>
> That is interesting. It would seem that Microsoft is trying to make MS SQL
> the Swiss Army knife of data management. I wonder how this is going to
> interface with Azure?
>
> Jim

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fhtapia at gmail.com  Wed May 20 16:38:44 2015
From: fhtapia at gmail.com (fhtapia at gmail.com)
Date: Wed, 20 May 2015 21:38:44 +0000
Subject: [dba-SQLServer] Schemas for separation of clients?
In-Reply-To: <252121716.41600178.1432150806761.JavaMail.root@shaw.ca>
References: <252121716.41600178.1432150806761.JavaMail.root@shaw.ca>
Message-ID:

Jim, I've been using Bitvise to set up the tunnel; then I can simply pull
up MySQL Workbench (which off my Mac doesn't require the Bitvise SSH
client).

On Wed, May 20, 2015 at 12:41 PM Jim Lawrence wrote:

> Hi Mark:
>
> I have just started to work with SSH and have only been using the basics
> so I can connect to my servers from anywhere.
>
> One way I have been using SSH is via Filezilla (the SSH client is built
> in). It is not the traditional package you would think of, as mostly
> people use FZ as a secure desktop-to-webserver connection...but it is so
> much more than that. [...]
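For anyone who wants to try the same kind of tunnel against SQL Server,
here is a minimal OpenSSH sketch. The gateway host, user name and local
port are made-up placeholders; 1433 is SQL Server's default TCP port, and
Bitvise's GUI exposes equivalent client-to-server forwarding settings:

    # Forward local port 11433 through the SSH gateway to the database box
    # (-N opens no remote shell; -L defines the local port-forward):
    ssh -N -L 11433:dbserver.internal:1433 admin@gateway.example.com

    # While the tunnel is up, point SSMS or an ODBC DSN at: localhost,11433

Everything the client sends to localhost,11433 is carried over the
encrypted SSH session and handed to the database host on port 1433, which
is why the database behaves as if it were running locally.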
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fuller.artful at gmail.com  Thu May 21 05:24:54 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Thu, 21 May 2015 06:24:54 -0400
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To:
References:
Message-ID:

Hi Borge, and another welcome to the group.

Since Stuart asked, I'll chime in with a note or two on the topic of
extended properties.

In general, their impact on the size of the database is negligible, so
there's no reason to delete them. Instead, be thankful for whatever the
upsizing wizard was able to bring in.

More specifically, extended properties can be used to:
a) specify captions for a table, view or column
b) specify an input mask (i.e. for zip/postal codes and maybe phone
numbers, part numbers, etc.)
c) describe various objects (table, view, column) for use in data
dictionaries and documentation

Beyond that, I personally think it's a good idea to add useful ones that
might not have been migrated. Whenever I create a new table or discover
one without extended properties, I typically add new ones that I think I
will someday need.
Here's a simple example of how to do that, cribbed from the documentation.
The extended property value 'Minimum inventory quantity.' is added to the
SafetyStockLevel column in the Product table that is contained in the
Production schema:

USE AdventureWorks2008R2;
GO
EXEC sys.sp_addextendedproperty
    @name = N'MS_DescriptionExample',
    @value = N'Minimum inventory quantity.',
    @level0type = N'SCHEMA', @level0name = Production,
    @level1type = N'TABLE',  @level1name = Product,
    @level2type = N'COLUMN', @level2name = SafetyStockLevel;
GO

To display the extended properties, use the function fn_listextendedproperty.
This function can display extended properties on a single database object,
or on all objects in the database, based on the object type. For example,
you can return the extended properties on a table or on all columns in the
table.

The following example displays all extended properties set on the database
itself.

USE AdventureWorks2008R2;
GO
SELECT objtype, objname, name, value
FROM fn_listextendedproperty(default, default, default, default, default,
    default, default);
GO

Hope this helps.

Arthur
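In the same spirit, here is a minimal sketch of adding a caption-style
property to a single column and reading back just that property. The
property name 'Caption' is an assumption here, so substitute whatever name
your front end actually reads; the object names reuse the AdventureWorks
example above:

USE AdventureWorks2008R2;
GO
-- Add a 'Caption' extended property to one column ('Caption' is an
-- assumed name; use the property name your tools expect):
EXEC sys.sp_addextendedproperty
    @name = N'Caption', @value = N'Safety stock level',
    @level0type = N'SCHEMA', @level0name = Production,
    @level1type = N'TABLE',  @level1name = Product,
    @level2type = N'COLUMN', @level2name = SafetyStockLevel;
GO
-- Read back only that property, on only that column:
SELECT objtype, objname, name, value
FROM sys.fn_listextendedproperty(N'Caption', N'SCHEMA', N'Production',
    N'TABLE', N'Product', N'COLUMN', N'SafetyStockLevel');
GO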
On Wed, May 20, 2015 at 2:15 PM, wrote:

> Hi Borge,
>    Welcome to the group!  Hopefully we can all contribute to your
> questions :)
>
> 1) I haven't worked with Access upsized tables for quite some time, so
> hopefully someone else can chime in here on whether they're required by
> the FE GUI.
>
> 2) You are correct in thinking that you should not need to auto-grow your
> database. For performance reasons, growing a database should occur during
> idle times. (So when is that?) That's you, the admin: you review database
> growth by monitoring it with any of the SQL reports or 3rd-party
> reporting, and then you can gauge monthly how often you want to grow your
> database.
>
> Your log file should grow as little as possible. In order to effectively
> use point-in-time recovery you need to have regular transaction log
> backups. This ensures you will be able to recover your database to its
> last known good state.
>
> Generally, to recover a database you FIRST back up the transaction log.
> Then step two would be to apply the last full backup, followed by your
> differential backups, and last your transaction log backups. You of
> course want to discriminate with the last transaction log backup file, to
> specify the time right before the moment of failure.
>
> So what size should the log file be? A general rule is to make a default
> size of about 10% of the size of your mdf file. That's just a generic
> rule, but if you make it 2x the size of the largest table in your mdf
> (data) file, you can bet that you will have very little growth occurring
> in the transaction log.
>
> 3) After a backup, SQL Server takes the committed transactions from the
> log file and writes them to the data file. If you are in simple mode this
> happens regularly on a checkpoint. If you have a mission-critical
> transactional database you will want to back up the log on a regular
> basis. My setup is generally to back up the log when it reaches 60% of
> capacity. I write my backup files to a separate disk system, in case of a
> disk catastrophe. So this provides me with a regular log backup at
> roughly every 30 min for many of our systems.
>
> So, under regular practice, once you've set your transaction log file,
> you will only need to grow it once your data file reaches a larger
> capacity.
>
> With SQL Server, the database files are set, and they reuse any internal
> space, when available. So the file size you see on the disk is not the
> actual size of your data, which is why your bak files are a different
> size.
>
> 4) There are a few; I'll dig through some of my links and post back. Do
> you prefer a T-SQL command or via the GUI?
>
> On Tue, May 19, 2015 at 10:34 PM Borge Hansen wrote:
>
> > Hi all,
> > We are finally in the process of migrating an Access application
> > backend, in production since 2000, to a SQL Db backend.
> > At the same time we are upgrading the frontend from Access 2003 to
> > Access 2010, and the client is transferring from a 2003 Terminal Server
> > to a 2012R2 Terminal Server (I think they call it something else now,
> > other than Terminal Server).
> >
> > We upsized to a 2005 SQL Standard Edition Db using the Access 2003
> > Upsizing Wizard, looking out for the various gotchas.
> >
> > The SQL backend is now in production on the existing 2003 Server.
> >
> > It is about to be transferred to SQL 2014 Express on a new 2012R2
> > Server.
> >
> > A couple of questions:
> >
> > *1*.
> > Each table has lots of extended table properties - probably carried
> > over from Access.
> > Is there any need to keep these extended properties?
> >
> > *2*.
> > I backed up a version of the 2005 SQL production Db and restored it on
> > SQL 2014 Express as a development / staging Db.
> >
> > In the Db general properties the Db is reported as being 1390.31Mb in
> > size with 80.61Mb available.
> > The .mdf file is reported as being 780Mb in initial size;
> > the .ldf file as being 612Mb in initial size.
> >
> > I changed the auto-growth from the default 10Mb for the .mdf and 10%
> > for the .ldf to 1,024Mb for each, with a maximum of 10,240Mb (10Gb -
> > the max size for a SQL Express Db).
> >
> > Is an incremental growth of 1Gb the best setting?
> > Or what do you people suggest?
> >
> > *3*.
> > I did another full backup from the SQL 2005 Db using SSMS. The .bak
> > file is 727Mb.
> >
> > What happens to the .ldf log file during a full backup?
> >
> > At a high conceptual level I understand the function of the log file:
> > it helps restore a Db to the point of the last committed transaction
> > before a crash, by using a combination of the last full backup and the
> > log file.
> >
> > When we do a full backup, is the log file "reset" somehow, or does it
> > still keep a lot of history information?
> > A log file can grow very big.
> > Some say to never shrink the log file...
> > What is the abc of dealing with / handling the log file?
> >
> > *4*.
> > Anyone got a link to a good step-by-step walkthrough of how to do a
> > restore using a full backup + existing log file to a specific point,
> > i.e. just before a Db crash?
> >
> > Thanks,
> > /borge

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fuller.artful at gmail.com  Thu May 21 05:54:03 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Thu, 21 May 2015 06:54:03 -0400
Subject: [dba-SQLServer] SQL 2016 and JSON
In-Reply-To: <644426785.41660280.1432156018427.JavaMail.root@shaw.ca>
References: <644426785.41660280.1432156018427.JavaMail.root@shaw.ca>
Message-ID:

Jim, I second that emotion!
Whenever I look at XML, I think of signal-to-noise ratio.

Arthur

On Wed, May 20, 2015 at 5:06 PM, Jim Lawrence wrote:

> Hi Arthur:
>
> PS It is nice to see that XML is on its way out...well, maybe not on its
> way out, but definitely being handed its cap. ;-)
>
> Jim

From pcs.accessd at gmail.com  Thu May 21 12:22:37 2015
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Fri, 22 May 2015 01:22:37 +0800
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To:
References:
Message-ID:

fhtapia, thanks a lot for taking the time to enlighten me a bit here. I
will have to read up a bit more on this, so I can set up a proper and
automated backup system for this fairly small but nevertheless LOB SQL
database. Links to both the T-SQL command and the GUI approach are welcome.

/borge

On Thu, May 21, 2015 at 2:15 AM, wrote:

> Hi Borge,
>    Welcome to the group!  Hopefully we can all contribute to your
> questions :) [...]
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From pcs.accessd at gmail.com  Thu May 21 12:27:46 2015
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Fri, 22 May 2015 01:27:46 +0800
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To:
References:
Message-ID:

Thanks Arthur for adding your insights to Stewart's response...
Will keep those extended table properties.

/borge

On Thu, May 21, 2015 at 6:24 PM, Arthur Fuller wrote:

> Hi Borge, and another welcome to the group.
>
> Since Stuart asked, I'll chime in with a note or two on the topic of
> extended properties. [...]
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From jlawrenc1 at shaw.ca  Thu May 21 12:52:18 2015
From: jlawrenc1 at shaw.ca (Jim Lawrence)
Date: Thu, 21 May 2015 11:52:18 -0600 (MDT)
Subject: [dba-SQLServer] Schemas for separation of clients?
In-Reply-To: <1305624781.42277087.1432230090432.JavaMail.root@shaw.ca>
Message-ID: <183555179.42285397.1432230738597.JavaMail.root@shaw.ca>

That sounds like an excellent product...I had not heard of it before, but
now it will be added to my link list. :-)

Jim

From: fhtapia at gmail.com
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 2:38:44 PM
Subject: Re: [dba-SQLServer] Schemas for separation of clients?

Jim, I've been using Bitvise to set up the tunnel; then I can simply pull
up MySQL Workbench (which off my Mac doesn't require the Bitvise SSH
client).

From fuller.artful at gmail.com  Thu May 21 13:58:15 2015
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Thu, 21 May 2015 14:58:15 -0400
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To:
References:
Message-ID:

Oops! Sorry, Francisco, I attributed your comments to Stuart by mistake.

Borge, the "problem" is that so many people here know what they're doing :)

Arthur

On Thu, May 21, 2015 at 1:27 PM, Borge Hansen wrote:

> Thanks Arthur for adding your insights to Stewart's response...
> Will keep those extended table properties.
> /borge
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From accessd at shaw.ca  Mon May 25 11:16:36 2015
From: accessd at shaw.ca (Jim Lawrence)
Date: Mon, 25 May 2015 10:16:36 -0600 (MDT)
Subject: [dba-SQLServer] In-database R coming to SQL Server 2016
In-Reply-To:
Message-ID: <1209659526.81849026.1432570596560.JavaMail.root@shaw.ca>

Hi Gustav:

I will definitely be having a closer look when the beta is released.

Jim

----- Original Message -----
From: "Gustav Brock"
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 8:25:15 AM
Subject: Re: [dba-SQLServer] In-database R coming to SQL Server 2016

Hi Jim

A preview is around the corner:
http://www.microsoft.com/en-ca/server-cloud/products/sql-server-2016/

/gustav

-----Original message-----
From: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com]
On behalf of Jim Lawrence
Sent: 20 May 2015 17:11
To: Discussion concerning MS SQL Server
Subject: Re: [dba-SQLServer] In-database R coming to SQL Server 2016

Hi Arthur:

That would be a savings... Has Microsoft released a beta or an alpha
version yet?

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Tuesday, May 19, 2015 4:57:17 PM
Subject: Re: [dba-SQLServer] In-database R coming to SQL Server 2016

Jim, I agree. But the current situation requires that you first export the
data set from SQL Server or whatever DB is your choice, and then use R to
analyze the exported data. MS claims that implementing R in a sandbox
inside SQL Server would eliminate that middle step. So I've read, anyway.

On Tue, May 19, 2015 at 1:33 PM, Jim Lawrence wrote:

> Hi Arthur:
>
> Using the R language in SQL would seem more like an add-on. The
> language is very different from the SQL language...it produces
> graphics with animation from huge data sets, to my understanding.
>
> Jim

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From accessd at shaw.ca  Mon May 25 11:37:29 2015
From: accessd at shaw.ca (Jim Lawrence)
Date: Mon, 25 May 2015 10:37:29 -0600 (MDT)
Subject: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server
In-Reply-To:
Message-ID: <706385386.81866491.1432571849038.JavaMail.root@shaw.ca>

Hi Arthur:

I have installed Docker on one of my servers and have tested a number of
small containers. I then decided to download and install the VoltDB demo
container to see how the product would run. It did not run, as there were
some issues within the container; apparently not all dependencies were
resolved before it was compiled. I complained, but there were a few people
ahead of me. Now we will wait until a new build is out.

Jim

----- Original Message -----
From: "Arthur Fuller"
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 10:52:17 AM
Subject: Re: [dba-SQLServer] Michael Stonebraker on the obsolescence of Oracle, DB2 and SQL Server

Jim,

Perhaps you missed a couple of key things, which were not described in
that brief interview, but I've read all the docs.

First and foremost, you scale the number of servers and their RAM to suit
the db of interest. Second, the total RAM on all the servers is your
workspace. IOW, 8 servers each armed with 64GB of RAM = 8 * 64 = 512 GB of
workspace. Not enough? Double up on the RAM on each server.

Third, partitioning. You partition on a particular column. No need to
describe the partitions in more detail; VoltDB takes care of that.
Partitioning also applies to indexes, of course, but here's an interesting
new (to me) idea: you can partition stored procedures as well. The
advantage of partitioned stored procedures is that everything that needs
to be there will be there, within the partition. Not all tables need to be
partitioned, of course, for example lookup tables; with those you use
replication instead, placing a copy on each partition.

So think of a 1/2 TB database sitting in RAM: no disk accesses at all
except at startup and shutdown. And when you want to move the data from
VoltDB to a disk-based system for OLAP etc., you don't even have to shut
it down; just specify how often (in either transactions or temporal units)
you want to save to disk.
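To make the partitioning idea concrete, here is a small sketch in
VoltDB-style DDL. The table, column and procedure names are invented, and
the exact syntax may differ between VoltDB versions, so treat this as an
illustration rather than a reference:

-- Rows are spread across servers by the value of the partitioning column:
CREATE TABLE orders (
    customer_id BIGINT  NOT NULL,
    order_id    BIGINT  NOT NULL,
    amount      DECIMAL NOT NULL
);
PARTITION TABLE orders ON COLUMN customer_id;

-- A partitioned procedure runs entirely on the one partition that owns
-- the supplied customer_id, so no cross-server coordination is needed:
CREATE PROCEDURE GetCustomerTotal
    PARTITION ON TABLE orders COLUMN customer_id
    AS SELECT SUM(amount) FROM orders WHERE customer_id = ?;

-- A lookup table is simply left unpartitioned, which in VoltDB means it
-- is replicated to every partition.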
Even on my modest hardware, some of the sample programs are achieving 50k
TPS, which is a far cry from anything SQL Server or Oracle is capable of.

Arthur

On Wed, May 20, 2015 at 12:11 PM, Jim Lawrence wrote:

> Hi Arthur:
>
> A great article. It seems that Stonebraker has accurately described the
> problems...
>
> ...but, from my observations, he has not given a solution. He does
> mention databases that run in RAM and are laid out like arrays. I don't
> see the difference between arrays and tables, as they both use rows and
> columns, but maybe he defines arrays as rows and columns in memory. The
> problem I see is that there is not yet any RAM that matches the storage
> of hard drives...not even close...even attempting such a system would
> cost an incredible price. Then he mentioned the performance of GPU
> processors. They are very fast for sure, but the energy requirements are
> prohibitive. (A good friend has a BitCoin mining operation and he says
> his block of GPUs uses more power than the rest of his house...and he
> uses electric heat.)
>
> So, IMHO, I would guess we will have to wait for technology to catch up
> before Michael Stonebraker's predictions can be fully implemented.
>
> In the meantime, how has your RAM-array-based VoltDB been running?
>
> Jim

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From accessd at shaw.ca  Mon May 25 12:27:25 2015
From: accessd at shaw.ca (Jim Lawrence)
Date: Mon, 25 May 2015 11:27:25 -0600 (MDT)
Subject: [dba-SQLServer] Schemas for separation of clients?
In-Reply-To:
Message-ID: <1147977171.81905469.1432574845830.JavaMail.root@shaw.ca>

It sounds like a good product. It has a GUI that runs on all platforms,
which is a definite plus, and that is what you are paying for.

Jim

----- Original Message -----
From: fhtapia at gmail.com
To: "Discussion concerning MS SQL Server"
Sent: Wednesday, May 20, 2015 2:38:44 PM
Subject: Re: [dba-SQLServer] Schemas for separation of clients?

Jim, I've been using Bitvise to set up the tunnel; then I can simply pull
up MySQL Workbench (which off my Mac doesn't require the Bitvise SSH
client).

On Wed, May 20, 2015 at 12:41 PM Jim Lawrence wrote:

> Hi Mark:
>
> I have just started to work with SSH and have only been using the basics
> so I can connect to my servers from anywhere. [...]
Below is a list of features I cut and pasted from an article on > FZ: > > * Supports FTP, FTP over SSL/TLS (FTPS) and SSH File Transfer Protocol > (SFTP) > * IPv6 support > * Available in more than 40 languages > * Supports resume and transfer of large files >4GB > * Easy to use Site Manager and transfer queue > * Bookmarks > * Drag & drop support > * Speed limits > * Filename filters > * Directory comparison > * Network configuration wizard > * Remote file editing > * Keep-alive > * HTTP/1.1, SOCKS5 and FTP Proxy support > * Logging to file > * Synchronized directory browsing > * Remote file search > * Tabbed interface to connect to multiple servers > > Connecting to a database should be easy though I have not tried but here > is a simple example/explanation of how to connect to a MySQL DB via a > remote SSH connection that just port forwards to the DB: > > > http://www.howtogeek.com/howto/ubuntu/access-your-mysql-server-remotely-over-ssh/ > > Of course it assumes the remote server is setup for host internet > connections. MS SQL should be as easy to configure. Here is another link to > an article on the concept of port-forwarding with SSH (which took me a bit > of time to fully understand). > > > http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html > > In a nutshell, once an appropriate SSH string is entered any database can > be ran on a host server as if it is running locally on the client. You can > also multiplex from your client computer to any number of remote databases, > simultaneously...limited of course by your bandwidth. Of course the > security and policies are limited only by your imagination and requirement > (via ssh/config file). Word from the wise: Though it is temping do not go > full bore encryption...once the key is lost, it is gone forever. > > PS: Another good article on setting up Port-forwarding and a few little > work-arounds when issues arrive: > > > http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html > > HTH > > As I become more familiar with the product I will post my insights here > and please do the same. > > Jim > > ----- Original Message ----- > > From: "Mark Breen" > To: "Discussion concerning MS SQL Server" < > dba-sqlserver at databaseadvisors.com> > Sent: Tuesday, March 31, 2015 6:23:39 AM > Subject: Re: [dba-SQLServer] Schemas for separation of clients? > > Hello Jim > > Do you have any specific links to learn how yo use openssh on windows to > establish a VPN. > > I can Google myself but just asking in case you have 'the perfect > document'. > On 31 Mar 2015 10:57, "Jim Lawrence" wrote: > > > Hi John: > > > > If you are thinking of going Cloud based, it may be an idea to try SSL. > > OpenSSL is an excellent super secure VPN. It works on both Linux and > there > > is now a Windows version. I use Linux version all the time to connect to > my > > servers from anywhere. > > > > https://www.youtube.com/watch?v=FZyUX-LZHts > > > > Jim > > > > ----- Original Message ----- > > From: "John W. Colby" > > To: "Discussion concerning MS SQL Server" < > > dba-sqlserver at databaseadvisors.com>, jwcolby at gmail.com > > Sent: Monday, March 30, 2015 9:40:14 AM > > Subject: Re: [dba-SQLServer] Schemas for separation of clients? > > > > I am actually talking about other databases, not the "database from > hell". > > > > I have been following your AWS thread with interest. How do you get the > > public IP address? 
> ----- Original Message -----
> From: "Mark Breen"
> To: "Discussion concerning MS SQL Server" <dba-sqlserver at databaseadvisors.com>
> Sent: Tuesday, March 31, 2015 6:23:39 AM
> Subject: Re: [dba-SQLServer] Schemas for separation of clients?
>
> Hello Jim
>
> Do you have any specific links to learn how to use OpenSSH on Windows to establish a VPN?
>
> I can Google myself but just asking in case you have 'the perfect document'.
> On 31 Mar 2015 10:57, "Jim Lawrence" wrote:
> > Hi John:
> >
> > If you are thinking of going Cloud based, it may be an idea to try SSH. OpenSSH is an excellent, super-secure VPN alternative. It works on Linux and there is now a Windows version. I use the Linux version all the time to connect to my servers from anywhere.
> >
> > https://www.youtube.com/watch?v=FZyUX-LZHts
> >
> > Jim
> >
> > ----- Original Message -----
> > From: "John W. Colby"
> > To: "Discussion concerning MS SQL Server" <dba-sqlserver at databaseadvisors.com>, jwcolby at gmail.com
> > Sent: Monday, March 30, 2015 9:40:14 AM
> > Subject: Re: [dba-SQLServer] Schemas for separation of clients?
> >
> > I am actually talking about other databases, not the "database from hell".
> >
> > I have been following your AWS thread with interest. How do you get the public IP address? I would think that would make the speeds much better than trying to tunnel in using Hamachi. How do you deal with security / hack attempts? Having that public-facing IP has always put me off.
> >
> > John W. Colby
> >
> > On 3/30/2015 10:47 AM, Gustav Brock wrote:
> > > Hi John
> > >
> > > I have just set up a micro instance at AWS hosting SQL Server 2008 Express and a public IP address. Then I can attach it directly via ODBC. We have only a 15 Mbit/s download, so speed is slower than from our in-house SQL Servers but fully acceptable.
> > >
> > > It takes a little to set up the access to AWS. I skipped the VPN offering but I may add that later when I find out how to do it.
> > >
> > > My need is very far from yours, with a maximum record count per table of some hundred thousand, so I may never meet the issues you are dealing with.
> > >
> > > /gustav
> > >
> > > -----Original Message-----
> > > From: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com] On behalf of John W. Colby
> > > Sent: 30 March 2015 15:35
> > > To: Discussion concerning MS SQL Server; jwcolby at gmail.com
> > > Subject: Re: [dba-SQLServer] Schemas for separation of clients?
> > >
> > > Are you guys hitting the BE with an Access FE? If so, how are you linking to the tables?
> > >
> > > I am running Access linked tables over the internet to SQL Server on a privately hosted VM / SQL Server. It runs just fine, though a bit slow. OK, very slow given how I design things. But it does work. In order to do it, however, I set up a single user / password out in SQL Server, then come into the VM using Hamachi. I think that Hamachi is one of the causes of the slowness, though since that is the only way in for me, it is tough to know exactly.
> > >
> > > Just wondering what you are up to and how to implement it.
> > >
> > > John W. Colby
> > >
> > > On 3/30/2015 9:18 AM, Gustav Brock wrote:
> > >> Hi Mark
> > >>
> > >> Perfect. I missed that. I have a t2.micro instance running now with public access.
> > >>
> > >> I wonder if I should set up a VPN connection? It seems quite straightforward at the AWS site but may create some challenges at my site.
> > >>
> > >> /gustav
> > >>
> > >> -----Original Message-----
> > >> From: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com] On behalf of Mark Breen
> > >> Sent: 30 March 2015 11:31
> > >> To: Discussion concerning MS SQL Server
> > >> Subject: Re: [dba-SQLServer] Schemas for separation of clients?
> > >>
> > >> Hello Gustav
> > >>
> > >> I have played with RDS but mostly I use EC2. For you I was suggesting EC2 and SQL Express. Then you can have multiple DBs.
> > >>
> > >> Mark
> > >> On 30 Mar 2015 16:28, "Gustav Brock" wrote:
> > >>> Hi Mark
> > >>>
> > >>> Interesting. We've used AWS for years, but for storage only, so I was not up-to-date with their RDS offerings. It seems like it could be well suited for my purpose.
> > >>>
> > >>> I had to update our login options, and that seems for some reason to be a major task, as they claim it can take up to 24 hours before settled - and until then no RDS service. I have to turn on some patience ...
> > >>>
> > >>> /gustav
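Gustav's "attach it directly via ODBC" works because the instance exposes a public endpoint. From another SQL Server the same endpoint can be reached with a linked server; a minimal sketch, where the host name, login, database and table are all hypothetical placeholders:

    -- All names below are hypothetical placeholders.
    EXEC sp_addlinkedserver
        @server = N'AWSEXPRESS',
        @srvproduct = N'',
        @provider = N'SQLNCLI',
        @datasrc = N'tcp:ec2-xx-xx-xx-xx.compute-1.amazonaws.com,1433';

    EXEC sp_addlinkedsrvlogin
        @rmtsrvname = N'AWSEXPRESS',
        @useself = N'FALSE',
        @rmtuser = N'app_user',
        @rmtpassword = N'**********';

    -- Query through the link with a four-part name.
    SELECT TOP (10) * FROM AWSEXPRESS.SomeDb.dbo.SomeTable;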
> > >>> -----Original Message-----
> > >>> From: dba-SQLServer [mailto:dba-sqlserver-bounces at databaseadvisors.com] On behalf of Mark Breen
> > >>> Sent: 30 March 2015 05:46
> > >>> To: Discussion concerning MS SQL Server
> > >>> Subject: Re: [dba-SQLServer] Schemas for separation of clients?
> > >>>
> > >>> Hello Gustav
> > >>>
> > >>> I have been using AWS for two years now and love it. Amazon have some new micro machines (t2) that are really cheap and yet have some power. Their costs are low enough to consider for low-budget projects, yet they offer all the quality of a true best-in-class cloud.
> > >>>
> > >>> I usually do all my setting up on an enormous machine, then shut down, switch to a micro instance and restart. That way I get great performance when in RDP, and once I am done I pay pennies per day.
> > >>>
> > >>> For me the absolute cost is secondary to the almost 100% likelihood that my hardware will never fail. This reliability is what I am really buying.
> > >>>
> > >>> As an aside, I have automated all my daily backups and transferred them all off-machine to Amazon S3.
> > >>>
> > >>> Hth
> > >>> Mark
> > >>> On 30 Mar 2015 02:42, "Gustav Brock" wrote:
> > >>>> Hi Mark
> > >>>>
> > >>>> Good points. The added precautions and potential issues may very well not be more "expensive" than the little money saved.
> > >>>>
> > >>>> /gustav

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fhtapia at gmail.com Tue May 26 10:49:23 2015
From: fhtapia at gmail.com (fhtapia at gmail.com)
Date: Tue, 26 May 2015 15:49:23 +0000
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To: References: Message-ID:

Borge,
Sorry it's taken me a while to respond, but this link should be helpful. For #4, this one is relevant:

http://www.mssqltips.com/sqlservertutorial/119/sql-server-point-in-time-restore/

Note: if you had a full backup, a differential, and transaction log backups, your process would be to restore WITH NORECOVERY on all your files up to the last transaction log file; set that one up with STOPAT and the time you need.

A full working demo:

http://blog.sqlauthority.com/2011/12/23/sql-server-a-quick-script-for-point-in-time-recovery-back-up-and-restore/

Good Luck!
Francisco
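A minimal sketch of the restore sequence Francisco describes; the file names, paths and STOPAT time are hypothetical, and WITH REPLACE is only needed when overwriting an existing database:

    -- 1. Full backup: leave the database in the restoring state.
    RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backup\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;

    -- 2. Most recent differential: still NORECOVERY.
    RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backup\MyDb_diff.bak'
    WITH NORECOVERY;

    -- 3. Log backups in sequence; on the last one, stop just before the crash.
    RESTORE LOG MyDb
    FROM DISK = N'D:\Backup\MyDb_log_1.trn'
    WITH NORECOVERY;

    RESTORE LOG MyDb
    FROM DISK = N'D:\Backup\MyDb_log_2.trn'
    WITH STOPAT = N'2015-05-26 14:30:00', RECOVERY;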
On Tue, May 19, 2015 at 10:34 PM Borge Hansen wrote:
> Hi all,
> We are finally in the process of migrating an Access application backend, in production since 2000, to a SQL Db backend. At the same time we are upgrading the frontend from Access2003 to Access2010, and the client is transferring from a 2003 Terminal Server to a 2012R2 Terminal Server (I think they call it something else now other than Terminal Server).
>
> We upsized to a 2005 SQL Standard Edition Db using the Access2003 Upsizing Wizard, looking out for the various gotchas.
>
> The SQL backend is now in production on the existing 2003 Server. It is about to be transferred to SQL 2014 Express on a new 2012R2 Server.
>
> A couple of questions:
>
> *1*.
> Each table has lots of extended table properties - probably carried over from Access. Is there any need to keep these extended properties?
>
> *2*.
> I backed up a version of the 2005 SQL production Db and restored it on SQL 2014 Express as a development / staging Db.
>
> In the Db general properties the Db is reported as being 1390.31 MB in size with 80.61 MB available. The .mdf file is reported as being 780 MB in initial size; the .ldf file as being 612 MB in initial size.
>
> I changed the autogrowth from the default 10 MB for the .mdf and 10% for the .ldf to 1,024 MB for each, with a maximum of 10,240 MB (10 GB - the max size for a SQL Express Db).
>
> Is an incremental growth of 1 GB the best setting? Or what do you people suggest?
>
> *3*.
> I did another full backup from the SQL 2005 Db using SSMS. The .bak file is 727 MB.
>
> What happens to the .ldf log file during a full backup?
>
> At a high conceptual level I understand the function of the log file: it helps restore a Db to the point of the last committed transaction before a crash by using a combination of the last full backup and the log file.
>
> When we do a full backup, is the log file "reset" somehow, or does it still keep a lot of history information? A log file can grow very big. Some say to never shrink the log file... What is the ABC of dealing with / handling the log file?
>
> *4*.
> Anyone got a link to a good step-by-step walkthrough of how to do a restore using a full backup + existing log file to a specific point, i.e. just before a Db crash?
>
> Thanks,
> /borge
>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com

From pcs.accessd at gmail.com Wed May 27 00:34:44 2015
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Wed, 27 May 2015 13:34:44 +0800
Subject: [dba-SQLServer] A couple of quick questions
In-Reply-To: References: Message-ID:

Francisco:
Thanks a lot!

For anyone else who is faced with setting up automated backups on SQL Server Express, here is a reference to a straightforward how-to article:

How to schedule and automate backups of SQL Server databases in SQL Server Express
https://support.microsoft.com/en-us/kb/2019698

and a PowerShell script for getting rid of the old .bak files (deleting all the .BAK files in a folder called C:\Scripts that are more than 7 days old):
http://blogs.technet.com/b/heyscriptingguy/archive/2006/11/17/how-can-i-delete-all-the-bak-files-in-a-folder-that-are-more-than-7-days-old.aspx

/borge
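Since Express has no SQL Server Agent, the KB article pairs a T-SQL backup script with Windows Task Scheduler and sqlcmd. A minimal sketch of the kind of statement such a job runs each night; the database name and path are hypothetical:

    -- Hypothetical database name and path; run nightly via Task Scheduler + sqlcmd.
    DECLARE @file nvarchar(260) =
        N'C:\Scripts\MyDb_' + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';

    BACKUP DATABASE MyDb
    TO DISK = @file
    WITH INIT, CHECKSUM, STATS = 10;

The date-stamped file name (e.g. MyDb_20150527.bak) keeps each day's backup distinct, so the 7-day PowerShell cleanup above can prune the old ones.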
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
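One loose end from Borge's question 3: a full backup does not truncate the log. Under the FULL recovery model, only transaction log backups mark log space as reusable; under SIMPLE, checkpoints reclaim it automatically at the cost of point-in-time restore. A minimal sketch, with hypothetical names:

    -- FULL recovery: back up the log regularly so its space can be reused
    -- (this is also what makes STOPAT restores possible).
    BACKUP LOG MyDb TO DISK = N'C:\Scripts\MyDb_log.trn';

    -- If point-in-time restore is not needed, SIMPLE recovery lets
    -- checkpoints reclaim log space without log backups.
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;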