From jwcolby at colbyconsulting.com Wed Sep 1 22:30:00 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 1 Sep 2004 23:30:00 -0400 Subject: [dba-SQLServer] VLDBs, the saga - Ramblings only In-Reply-To: Message-ID: <002b01c4909d$240acf90$80b3fea9@ColbyM6805> Charlotte, I intend to do some experiments and calculations. What do you mean by dimension tables? Are you talking about what we call lookup tables, i.e. a state table with a PK, replacing the state text with a FK relating back to the lookup table? I thought about this and can't decide whether the time to perform the join on 65 million records would completely outweigh any benefit. I will have to index some of these fields in order to do the slicing and dicing. Any true / false from what I know don't benefit from indexes since there are only two values. Any suggestions for how to speed this up are welcome. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Charlotte Foust Sent: Monday, August 30, 2004 11:44 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] VLDBs, the saga - Ramblings only >From what John has described, he has, in essence, a data warehouse. It could certainly be broken up into one or more fact tables with dimension tables to make the slicing and dicing easier, but you still wouldn't wind up with "normal" normalization, just 1NF tables. I've worked with this kind of data on a smaller scale (I used to work for a company that would have been a customer of that kind of list). I suspect the demographics they're using as selection criteria should probably be put into dimension tables, which will actually make the database bigger but will make the queries MUCH faster and easier. 
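[Editor's sketch] Charlotte's dimension-table suggestion, replacing each repeated demographic string with a small integer key into a lookup table, can be illustrated in a few lines. Python is used purely for illustration; the field values and function name are invented, not anything from the thread:

```python
# Sketch of the dimension-table idea: replace repeated demographic strings
# (e.g. state names) with small integer foreign keys into a lookup table.

def build_dimension(values):
    """Return (dimension, encoded): a key -> value lookup table plus the
    original column re-expressed as small integer foreign keys."""
    dimension = {}   # key -> original value (the dimension table)
    reverse = {}     # original value -> key
    encoded = []     # the fact-table column, now just integers
    for v in values:
        if v not in reverse:
            key = len(dimension) + 1
            dimension[key] = v
            reverse[v] = key
        encoded.append(reverse[v])
    return dimension, encoded

states = ["Connecticut", "Texas", "Connecticut", "California", "Texas"]
dim, fact_keys = build_dimension(states)
print(dim)        # {1: 'Connecticut', 2: 'Texas', 3: 'California'}
print(fact_keys)  # [1, 2, 1, 3, 2]
```

The fact table then stores a 1-, 2- or 4-byte key instead of the full string, which is where the size and index savings Charlotte mentions come from.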
Charlotte Foust -----Original Message----- From: Shamil Salakhetdinov [mailto:shamil at users.mns.ru] Sent: Monday, August 30, 2004 6:49 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] VLDBs, the saga - Ramblings only John, I didn't know they are these simple T/F... Why your database is getting so huge then? Do you have any explanations/calculations? From artful at rogers.com Wed Sep 1 23:45:28 2004 From: artful at rogers.com (Arthur Fuller) Date: Thu, 2 Sep 2004 00:45:28 -0400 Subject: [dba-SQLServer] VLDBs, the saga - Ramblings only In-Reply-To: Message-ID: <001301c490a7$abea1740$6501a8c0@rock> Almost by definition a data warehouse is about as denormalized as you can get. Typically it is way bigger than the OLTP database from which it stems. It creates an un-normalized database from the central fact table and calculates all the totals for every dimension. The more dimensions, the larger the table, but the advantage is that everything is precalculated so nothing has to be derived ad-hoc. In the ideal world, you refresh the OLAP db every night when the managers are asleep and next morning the data is fresh again. Theoretically :) A. -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Charlotte Foust Sent: Monday, August 30, 2004 11:44 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] VLDBs, the saga - Ramblings only >From what John has described, he has, in essence, a data warehouse. It could certainly be broken up into one or more fact tables with dimension tables to make the slicing and dicing easier, but you still wouldn't wind up with "normal" normalization, just 1NF tables. I've worked with this kind of data on a smaller scale (I used to work for a company that would have been a customer of that kind of list). 
I suspect the demographics they're using as selection criteria should probably be put into dimension tables, which will actually make the database bigger but will make the queries MUCH faster and easier. Charlotte Foust -----Original Message----- From: Shamil Salakhetdinov [mailto:shamil at users.mns.ru] Sent: Monday, August 30, 2004 6:49 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] VLDBs, the saga - Ramblings only John, I didn't know they are these simple T/F... Why your database is getting so huge then? Do you have any explanations/calculations? > Further it is not helpful to normalize them. This is SO huge that putting > them back together again in queries joining a dozen or more fields / tables > would be a slow nightmare. This is incredibly quick in MS SQL.... And I'm not talking about "brute force" joining back... Of course all that my first thoughts/advises probably do not make sense now when I get known your source data are quite different from what I've thought about it before... > Even normalizing 600 fields would be many many > hours (days? Weeks?) of work. I don't think so - but to not get it wrong again I need to see some of your source data... > But in fact I think it may just be better left denormalized. Yes, very probably if they are these simple T/F. (Although I can't get then how they present so different I think habits and tastes of Americans...) Shamil ----- Original Message ----- From: "John W. Colby" To: Sent: Monday, August 30, 2004 4:37 PM Subject: RE: [dba-SQLServer] VLDBs, the saga - Ramblings only > Shamil, > > In fact the numbers don't look like that at all, plus I really didn't know > what the fields looked like. Looking at the data in 600 fields is > non-trivial all by itself. > > However I can tell you it isn't a straight calc like that. Many and > probably most of the fields are a simple true false which led me to expect a > MUCH smaller db size. They just have a T or a F in them (the character). 
> > Further it is not helpful to normalize them. This is SO huge that putting > them back together again in queries joining a dozen or more fields / tables > would be a slow nightmare. Even normalizing 600 fields would be many many > hours (days? Weeks?) of work. But in fact I think it may just be better > left denormalized. > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Shamil > Salakhetdinov > Sent: Monday, August 30, 2004 1:42 AM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] VLDBs, the saga - Ramblings only > > > John, > > May I ask you did you make any calculations in advance to see what to expect > with your data loading? (Almost nobody do I know - But I did start to do > that for some time now - it did happen to me that "brute force" approach > doesn't work well sometimes :) ) > > Why not make it normalized as a first step of your data loading adventure? > > I mean: > > - you have 65 million records with 600 fields each and let's assume that > each field is 20 bytes (not Unicode). Then you get 65millions*600*20 = 780GB > (even if the average size of a record is less than that - it for sure(?) > should be more than 4KB and therefore you get ONE record loaded on ONE page > - MS SQL records can't be longer than 8KB - this is ~520GB without > indexes...) > > If as the first step you go through all your source data and get them > normalized you get something like: > > 65millions*600*4 = 156GB - the latter looks manageable even with ordinary > modern IDE drive even connected through USB2 - that's a cheap and quick > enough solution for starters (I assume that some references from normalized > table will be 4 bytes long, some two bytes long, others 1 byte long, some > like First- , Middle- and Last- name will be left intact - so all that will > probably give 4 bytes long field in average. 
And four bytes are enough to > get referenced even 65,000,000 data dictionary/reference (65,000,000 = 03 > DF D2 40 hexadecimal). > > So as the first step - get all your source data unzipped on one HDD (160GB)? > > Second step - analyze the unzipped data and find what is the best way to get > it normalized (here is a simple utility reading source data, making hash > code of every field and calculating quantities of different hash codes for > every field should be not a bad idea - such a utility should be very quick > and reliable solution to get good approximation where you can get with your > huge volume of the source data, especially if you write in on C++/C#/VB.NET > - I'm getting in this field now - so I can help here just for fun to share > this your challenge but spare time is problem here so this help can be not > as quick as it maybe needed for you now... > > Third step - make good (semi-) normalized data model (don't forget a > clustered primary key - Identity - you like them I know :)), calculate well > what size it will get when it will be implemented in a MS SQL database... > > Fourth step - load normalized data, maybe in several steps.... .... N-th > step get all you data loaded and manageable - here is the point where you > can get it back denormalized if you will need that (I don't think it will be > needed/possible with your/your customer resources and the OLAP tasks should > work well on (semi-)normalized database mentioned above), maybe as a set of > federated databases, linked databases, partitioned views ... > > I'd also advise you to read now carefully the "SQL Server Architecture" > chapter from BOL... > > Of course it's easy to advise - and it's not that easy to go through the > challenge you have... > > I'm not that often here these days but I'm every working day online on MS > Messenger (~ 8:00 - 22:00 (MT+3) Shamil at Work) - so you can get me there if > you'll need some of my help... 
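[Editor's sketch] Shamil's back-of-envelope arithmetic above checks out. A quick sanity check of the two estimates (Python, purely illustrative, restating his stated assumptions):

```python
# Shamil's assumptions: 65 million records, 600 fields, ~20 bytes per raw
# field, versus ~4 bytes per field once values are replaced by references
# into lookup tables.
RECORDS = 65_000_000
FIELDS = 600

raw_gb = RECORDS * FIELDS * 20 // 10**9         # un-normalized estimate
normalized_gb = RECORDS * FIELDS * 4 // 10**9   # 4-byte references on average

print(raw_gb, normalized_gb)   # 780 156, matching the 780GB and 156GB figures

# And 65,000,000 does indeed fit in four bytes (hex 03 DF D2 40, as he notes).
assert 65_000_000 == 0x03DFD240
```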
> > HTH & I hope you'll get it up&running soon, > Shamil > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Thu Sep 2 00:01:55 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Thu, 2 Sep 2004 01:01:55 -0400 Subject: [dba-SQLServer] VLDBs, the saga - Ramblings only In-Reply-To: <001301c490a7$abea1740$6501a8c0@rock> Message-ID: <003301c490a9$fb30f920$80b3fea9@ColbyM6805> And in fact I won't even need to recalc nightly since the data simply doesn't change unless I make updates or merges. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Thursday, September 02, 2004 12:45 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] VLDBs, the saga - Ramblings only Almost by definition a data warehouse is about as denormalized as you can get. Typically it is way bigger than the OLTP database from which it stems. It creates an un-normalized database from the central fact table and calculates all the totals for every dimension. The more dimensions, the larger the table, but the advantage is that everything is precalculated so nothing has to be derived ad-hoc. In the ideal world, you refresh the OLAP db every night when the managers are asleep and next morning the data is fresh again. 
Theoretically :) A. From cfoust at infostatsystems.com Thu Sep 2 10:55:43 2004 From: cfoust at infostatsystems.com (Charlotte Foust) Date: Thu, 2 Sep 2004 08:55:43 -0700 Subject: [dba-SQLServer] VLDBs, the saga - Ramblings only Message-ID: Dimension tables hold the keys that allow you to slice and dice data warehouse data. It's a little hard to get your head around the first time, but after that it makes perfect sense. Time is a fairly straightforward example. Your dimension table would contain probably a datetime field as the PK and for each date, it might also have a field for month number (overall), day number (overall), day in year, month in year, year, etc. Then you would join that table to each of the date fields in your fact tables and you could easily filter out records for the past 6 months or for dates in a range or where this date was 6 months earlier than that date. It sounds ideal for the data you're working with. Russell Sinclair has several books out on datawarehousing and I highly recommend them as primers. They also include working examples. Charlotte Foust -----Original Message----- From: John W. Colby [mailto:jwcolby at colbyconsulting.com] Sent: Wednesday, September 01, 2004 8:30 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] VLDBs, the saga - Ramblings only Charlotte, I intend to do some experiments and calculations. What do you mean by dimension tables? Are you talking about what we call lookup tables, i.e. a state table with a PK, replacing the state text with a FK relating back to the lookup table? I thought about this and can't decide whether the time to perform the join on 65 million records would completely outweigh any benefit. I will have to index some of these fields in order to do the slicing and dicing. Any true / false from what I know don't benefit from indexes since there are only two values. Any suggestions for how to speed this up are welcome. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Charlotte Foust Sent: Monday, August 30, 2004 11:44 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] VLDBs, the saga - Ramblings only >From what John has described, he has, in essence, a data warehouse. It could certainly be broken up into one or more fact tables with dimension tables to make the slicing and dicing easier, but you still wouldn't wind up with "normal" normalization, just 1NF tables. I've worked with this kind of data on a smaller scale (I used to work for a company that would have been a customer of that kind of list). I suspect the demographics they're using as selection criteria should probably be put into dimension tables, which will actually make the database bigger but will make the queries MUCH faster and easier. Charlotte Foust -----Original Message----- From: Shamil Salakhetdinov [mailto:shamil at users.mns.ru] Sent: Monday, August 30, 2004 6:49 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] VLDBs, the saga - Ramblings only John, I didn't know they are these simple T/F... Why your database is getting so huge then? Do you have any explanations/calculations? _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sat Sep 4 13:02:26 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 04 Sep 2004 14:02:26 -0400 Subject: [dba-SQLServer] Move database files from disk to disk In-Reply-To: Message-ID: <000901c492a9$5986a0a0$80b3fea9@ColbyM6805> I need to move a database from one disk to another, and can't remember how. I have disconnected my main (user) database and now need to move all the system stuff. John W. 
Colby www.ColbyConsulting.com From fhtapia at gmail.com Sat Sep 4 14:54:27 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sat, 4 Sep 2004 12:54:27 -0700 Subject: [dba-SQLServer] Move database files from disk to disk In-Reply-To: <000901c492a9$5986a0a0$80b3fea9@ColbyM6805> References: <000901c492a9$5986a0a0$80b3fea9@ColbyM6805> Message-ID: just modify your paths accordingly EXEC sp_attach_db @dbname = N'MyDB', @filename1 = N'd:\SqlServer\data\MyDB.mdf', @filename2 = N'f:\FastLogDisk\MyDB_log.ldf' On Sat, 04 Sep 2004 14:02:26 -0400, John W. Colby wrote: > I need to move a database from one disk to another, and can't remember how. > I have disconnected my main (user) database and now need to move all the > system stuff. > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From jwcolby at colbyconsulting.com Sat Sep 4 15:06:27 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 04 Sep 2004 16:06:27 -0400 Subject: [dba-SQLServer] Move database files from disk to disk In-Reply-To: Message-ID: <000b01c492ba$ac7975b0$80b3fea9@ColbyM6805> No, I mean the entire shootin match. I "set up" sql server and failed to specify where I wanted the data files. I need to move everything, the system database etc. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Saturday, September 04, 2004 3:54 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Move database files from disk to disk just modify your paths accordingly EXEC sp_attach_db @dbname = N'MyDB', @filename1 = N'd:\SqlServer\data\MyDB.mdf', @filename2 = N'f:\FastLogDisk\MyDB_log.ldf' On Sat, 04 Sep 2004 14:02:26 -0400, John W. Colby wrote: > I need to move a database from one disk to another, and can't remember > how. I have disconnected my main (user) database and now need to move > all the system stuff. > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From accessd at shaw.ca Sat Sep 4 12:07:37 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Sat, 04 Sep 2004 10:07:37 -0700 Subject: [dba-SQLServer] Big db update In-Reply-To: <05C61C52D7CAD211A7830008C7DF6F1079BDD0@DISABILITYINS01> Message-ID: Hi John: Here is an article on warehousing that you may find useful. It gives a number of sample layouts, things to do and not to do and references: http://www.itu.dk/people/pagh/IDB/DW.pdf Notice the use of at least partial normalization to keep things manageable. HTH Jim From jwcolby at colbyconsulting.com Sat Sep 4 16:19:12 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Sat, 04 Sep 2004 17:19:12 -0400 Subject: [dba-SQLServer] Big db update In-Reply-To: Message-ID: <000f01c492c4$d5ca0b50$80b3fea9@ColbyM6805> Thanks, I'm reading up on this stuff. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD) Sent: Saturday, September 04, 2004 1:08 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Big db update Hi John: Here is an article on warehousing that you may find useful. It gives a number of sample layouts, things to do and not to do and references: http://www.itu.dk/people/pagh/IDB/DW.pdf Notice the use of at least partial normalization to keep things manageable. HTH Jim _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Sat Sep 4 23:22:22 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sat, 4 Sep 2004 21:22:22 -0700 Subject: [dba-SQLServer] Move database files from disk to disk In-Reply-To: <000b01c492ba$ac7975b0$80b3fea9@ColbyM6805> References: <000b01c492ba$ac7975b0$80b3fea9@ColbyM6805> Message-ID: I don't see where else to do that unless you do it upfront in the installation, I'm going to CC: another really good list on this and find out if anyone has any ideas (i'll reply to you if anything comes back). ---------- Forwarded message ---------- From: John W. Colby Date: Sat, 04 Sep 2004 16:06:27 -0400 Subject: RE: [dba-SQLServer] Move database files from disk to disk To: dba-sqlserver at databaseadvisors.com No, I mean the entire shootin match. I "set up" sql server and failed to specify where I wanted the data files. I need to move everything, the system database etc. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Saturday, September 04, 2004 3:54 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Move database files from disk to disk just modify your paths accordingly EXEC sp_attach_db @dbname = N'MyDB', @filename1 = N'd:\SqlServer\data\MyDB.mdf', @filename2 = N'f:\FastLogDisk\MyDB_log.ldf' On Sat, 04 Sep 2004 14:02:26 -0400, John W. Colby wrote: > I need to move a database from one disk to another, and can't remember > how. I have disconnected my main (user) database and now need to move > all the system stuff. > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com -- -Francisco From jwcolby at colbyconsulting.com Sun Sep 5 01:34:56 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 02:34:56 -0400 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: Message-ID: <001801c49312$7a050420$80b3fea9@ColbyM6805> Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. 
I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this. Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that raids can be built from any combination of drives. I had hoped to set up a bunch of raid 1 containing disk pairs to hold the database but in the end had a heck of a time trying to get the raid to function reliably. I was trying to set up a Raid 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, threw in another 250 g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g Dimms a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turns out to be bad. It turns out that the bad stick was one of two I opened to just test the ram and so wrongly came to the conclusion that the ram simply did not work with these motherboards. 
Luckily after getting one of the motherboards up I went back "one last time" to try and manually tweak the ram parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running. The two new motherboards and my original dev machine (which I had retired in favor of my Laptop). I am so tired of Windows installs I could spit. Thank the big cahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single mdf file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second a64 machine, my old desktop, my new laptop and my Wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. 
So much remains to be done, and none of it could proceed until I got all the data in which was a waaaay bigger task than I anticipated. I have ordered the MS Action pack and somewhere down the road I hope to get a Windows 2003 server set up. I have heard rumors that it can run 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks that have been giving me suggestions and reading materials. John W. Colby www.ColbyConsulting.com From ebarro at afsweb.com Sun Sep 5 02:27:15 2004 From: ebarro at afsweb.com (Eric Barro) Date: Sun, 5 Sep 2004 00:27:15 -0700 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: <001801c49312$7a050420$80b3fea9@ColbyM6805> Message-ID: John, Why not use Merge Replication to merge your data? For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. SQL server creates a unique 16-byte id that you can use. In fact this is the field type used by the field when you include the table for replication. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. 
I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this. Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that raids can be built from any combination of drives. I had hoped to set up a bunch of raid 1 containing disk pairs to hold the database but in the end had a heck of a time trying to get the raid to function reliably. I was trying to set up a Raid 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, threw in another 250 g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g Dimms a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turns out to be bad. It turns out that the bad stick was one of two I opened to just test the ram and so wrongly came to the conclusion that the ram simply did not work with these motherboards. 
Luckily after getting one of the motherboards up I went back "one last time" to try and manually tweak the ram parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running. The two new motherboards and my original dev machine (which I had retired in favor of my Laptop). I am so tired of Windows installs I could spit. Thank the big cahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single mdf file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second a64 machine, my old desktop, my new laptop and my Wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. 
So much remains to be done, and none of it could proceed until I got all the data in, which was a waaaay bigger task than I anticipated. I have ordered the MS Action Pack, and somewhere down the road I hope to get a Windows Server 2003 machine set up. I have heard rumors that it can run in 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit, for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks who have been giving me suggestions and reading materials.

John W. Colby
www.ColbyConsulting.com

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com

From jwcolby at colbyconsulting.com Sun Sep 5 09:06:33 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 10:06:33 -0400 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: Message-ID: <001b01c49351$8feaa490$80b3fea9@ColbyM6805>

Eric,

>Why not use Merge Replication to merge your data?

Maybe pure ignorance. As I have said, I know nothing about SQL Server and am learning as I go. However, I am merging data in from a set of 22 comma delimited text files. From the word "replication" I assumed that this would merge data already in a SQL Server database.

>For a unique id you can use the uniqueidentifier field type

Precisely. I will be using a uniqueidentifier field as my PK. However, I also need a unique index to prevent putting the same record in the database twice. This is a database of people in the US. I need something that attempts to recognize John Colby in Northfield CT as already in the db and not put a second copy in the database. 
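Loading comma delimited files like these can also be done server-side with BULK INSERT, the T-SQL cousin of bcp. A sketch, with a hypothetical staging table and file path:

```sql
-- Sketch only: table name and file path are hypothetical.
-- One BULK INSERT per source file; bcp from the command line is equivalent.
BULK INSERT dbo.RawAddresses
FROM 'D:\Import\list_file_01.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,           -- table lock allows faster, minimally logged loads
    BATCHSIZE = 50000  -- commit in batches so one failure doesn't roll back everything
);
```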
The client says that the industry uses a field where they take N characters from field A, M characters from field B, O characters from field C, etc. I haven't seen any sign of such a field in the data that I am importing, but they keep saying we need such a thing.

John W. Colby
www.ColbyConsulting.com

-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 3:27 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update

John,

Why not use Merge Replication to merge your data?

For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. SQL Server creates a unique 16-byte id that you can use. In fact this is the field type used for the field when you include the table for replication.

--- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com

-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update

Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database, and once that is done I have to build a unique identifier that can be used to find an address record, if it exists, for merges and updates. Anyone who has worked on databases like this, feel free to pipe in with how you did this. 
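Eric's suggestion, expressed as DDL, would look something like the following. The table and column names are invented for illustration; the "is rowguid" checkbox in Enterprise Manager corresponds to the ROWGUIDCOL property:

```sql
-- Sketch only: table and column names are hypothetical.
CREATE TABLE dbo.Person (
    PersonID uniqueidentifier ROWGUIDCOL
        CONSTRAINT DF_Person_ID DEFAULT NEWID()
        CONSTRAINT PK_Person PRIMARY KEY NONCLUSTERED,
    FirstName varchar(50),
    LastName  varchar(50),
    Addr1     varchar(100),
    City      varchar(50),
    State     char(2),
    Zip       varchar(10)
);
```

Note that the GUID guarantees a unique row id but does nothing to detect the same person entered twice; that still requires a unique index or match key over the name/address fields, which is exactly the dedup problem being discussed.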
Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 - with a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices, and that RAIDs can be built from any combination of drives. I had hoped to set up a bunch of RAID 1 disk pairs to hold the database, but in the end had a heck of a time trying to get the RAID to function reliably. I was trying to set up a RAID 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to, and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, and threw in another 250gb drive on an IDE channel to hold the log file and the raw data files from which the import is happening.

This whole thing has been an exercise in humility, I must admit, with much time spent going nowhere. I purchased (2) 1gb DIMMs a few weeks ago for an existing Athlon machine (my old desktop), and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turns out to be bad. It turns out that the bad stick was one of the two I opened just to test the RAM, and so I wrongly came to the conclusion that the RAM simply did not work with these motherboards.

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com

From jmoss111 at bellsouth.net Sun Sep 5 09:26:27 2004 From: jmoss111 at bellsouth.net (JMoss) Date: Sun, 5 Sep 2004 09:26:27 -0500 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: <001b01c49351$8feaa490$80b3fea9@ColbyM6805> Message-ID:

John,

What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code, and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping.

Jim

-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. 
Colby Sent: Sunday, September 05, 2004 9:07 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com

From jmoss111 at bellsouth.net Sun Sep 5 09:35:01 2004 From: jmoss111 at bellsouth.net (JMoss) Date: Sun, 5 Sep 2004 09:35:01 -0500 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: Message-ID:

John,

Some software parses the street number, then breaks the street name down into parts also. Doubletake from PeopleSmith Software breaks down names and addresses in this fashion... but it's not cheap.

-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of JMoss Sent: Sunday, September 05, 2004 9:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update

John, What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping. 
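The match-key approach described here can be sketched as a computed column plus a unique index. The field names, character counts, and key composition below are illustrative guesses, not an industry-standard definition, and dbo.Person is a hypothetical table:

```sql
-- Sketch only: field names and character counts are hypothetical.
-- ISNULL keeps a single NULL field from making the whole key NULL.
ALTER TABLE dbo.Person ADD MatchKey AS
    LEFT(ISNULL(FirstName, ''), 5) + LEFT(ISNULL(LastName, ''), 10) +
    LEFT(ISNULL(Addr1, ''), 10)    + LEFT(ISNULL(City, ''), 5) +
    ISNULL(State, '')              + LEFT(ISNULL(Zip, ''), 5);

-- The unique index is what actually rejects a second copy of a record.
CREATE UNIQUE INDEX UX_Person_MatchKey ON dbo.Person (MatchKey);
```

SQL Server 2000 allows an index on a computed column only when the expression is deterministic and the required SET options (ANSI_NULLS, QUOTED_IDENTIFIER, ARITHABORT, etc.) are on. Hashing the key (e.g. with the xp_md5 proc linked above) merely shortens it; the dedup behavior comes from the unique index.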
Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:07 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Eric, >Why not use Merge Replication to merge your data? Maybe pure ignorance. As I have said I know nothing about SQL Server, learning as I go. However I am merging data in from a set of 22 comma delimited text files. From the word "replication" I assumed that this would merge data already in a sql server database. >For a unique id you can use the uniqueidentifier field type Precisely. I will be using a uniqueidentifier field as my pk. However I also need a unique index to prevent putting the same record in the database twice. This is a database of people in the US. I need something to attempt to recognize John Colby in Northfield CT as already in the db and not put a second copy in the database. The client says that the industry uses a field where they take N characters from field A and M characters from field B and O characters from field C etc. I haven't seen any sign of such a field in the data that I am importing, but they keep saying we need such a thing. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 3:27 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Why not use Merge Replication to merge your data?' For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. SQL server creates a unique 16-bit id that you can use. In fact this is the field type used by the field when you include the table for replication. 
--- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this. Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that raids can be built from any combination of drives. I had hoped to set up a bunch of raid 1 containing disk pairs to hold the database but in the end had a heck of a time trying to get the raid to function reliably. I was trying to set up a Raid 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. As a result I built one of the machines with (4) SATA 250gb disks for a terrabyte of storage, threw in another 250 g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. 
This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g Dimms a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turns out to be bad. It turns out that the bad stick was one of two I opened to just test the ram and so wrongly came to the conclusion that the ram simply did not work with these motherboards. Luckily after getting one of the motherboards up I went back "one last time" to try and manually tweak the ram parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running. The two new motherboards and my original dev machine (which I had retired in favor of my Laptop). I am so tired of Windows installs I could spit. Thank the big cahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single mdf file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second a64 machine, my old desktop, my new laptop and my Wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. 
Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. So much remains to be done, and none of it could proceed until I got all the data in which was a waaaay bigger task than I anticipated. I have ordered the MS Action pack and somewhere down the road I hope to get a Windows 2003 server set up. I have heard rumors that it can run 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks that have been giving me suggestions and reading materials. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 5 09:44:59 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 10:44:59 -0400 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: Message-ID: <001c01c49356$ef382080$80b3fea9@ColbyM6805> MATCH FIELD! That's the name they called it. What I haven't discovered is whether the field is actually required or whether a unique index on all these fields prevents dupes and I'm done. It seems like an extra step to pull those X characters out, append them all together, then drop them in a new field. They insist that it is needed but they don't understand databases. I understand databases but I don't know their business. I suspect that this new "match field" is then indexed to prevent dupes. Is it used for anything else? Is it a standard definition (how many characters from which fields)? John W. 
Colby www.ColbyConsulting.com

-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Sunday, September 05, 2004 10:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com

From jwcolby at colbyconsulting.com Sun Sep 5 10:12:49 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 11:12:49 -0400 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: Message-ID: <001d01c4935a$d35577b0$80b3fea9@ColbyM6805>

OK, it's time to talk about how to speed things up. I now have a SQL Server machine with (4) 250gb SATA data drives, each containing one file from the database. The four files together form a single database with a single table, which as of this morning contains 27 million records. Doing a count(*) from another machine took 18 minutes to count 24 million records. What can I do to speed up this count function? 
What can I do in general to speed up accessing the database? I am going to need to do cross tab type queries across all 65 million records to see how many people do or use X thing. I will also need to pull specific fields from all 65 million records WHERE some field .... There are 600 fields (so far) and I just can't see indexing all 600 fields although indexes on select fields will be a necessity. Table scans on Boolean values are going to take FOREVER. The "Boolean" fields are currently nvarchar(50) fields holding a Y or N. Will it help to go through the database changing these fields to a different data type? Please be as specific (including instructions on how if at all possible) as you can since a general "do Y" will take me HOURS of research. I LOVE research, research is good for the soul, but I only have so many hours in the day and I'm trying to learn a lot in a short time. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Sunday, September 05, 2004 10:35 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Some software parses the street number, then breaks the street name down into parts also. Doubletake from PeopleSmith Software breaks down names and addresses in this fashion... but it's not cheap.
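On the 18-minute COUNT(*) John describes above: when an approximate answer is acceptable, SQL Server 2000 keeps a running row count per table in the sysindexes system table, which can be read without scanning the table at all. A minimal sketch (the table name is hypothetical):

```sql
-- Approximate row count without a table scan (SQL Server 2000).
-- indid 0 = heap, indid 1 = clustered index; a table has one or the other.
SELECT [rows]
FROM sysindexes
WHERE id = OBJECT_ID('tblAddresses')  -- hypothetical table name
  AND indid < 2
```

The counter can drift after heavy bulk operations; DBCC UPDATEUSAGE refreshes it if an exact figure matters.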
-----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of JMoss Sent: Sunday, September 05, 2004 9:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:07 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Eric, >Why not use Merge Replication to merge your data? Maybe pure ignorance. As I have said I know nothing about SQL Server, learning as I go. However I am merging data in from a set of 22 comma delimited text files. From the word "replication" I assumed that this would merge data already in a sql server database. >For a unique id you can use the uniqueidentifier field type Precisely. I will be using a uniqueidentifier field as my pk. However I also need a unique index to prevent putting the same record in the database twice. This is a database of people in the US. I need something to attempt to recognize John Colby in Northfield CT as already in the db and not put a second copy in the database. 
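A match key of the sort Jim describes might be sketched as below; the table and column names are hypothetical, and production code would also standardize case, punctuation, and street abbreviations before taking the fragments:

```sql
-- Hypothetical match key: fixed-width fragments of name/address fields.
-- ISNULL keeps one NULL column from nulling out the whole key.
ALTER TABLE tblAddresses ADD MatchKey varchar(30) NULL
GO
UPDATE tblAddresses
SET MatchKey = UPPER(
    LEFT(ISNULL(FirstName, ''), 3) +
    LEFT(ISNULL(LastName, ''), 5) +
    LEFT(ISNULL(Address1, ''), 8) +
    LEFT(ISNULL(City, ''), 4) +
    LEFT(ISNULL(State, ''), 2) +
    LEFT(ISNULL(Zip, ''), 5))
```

The resulting string could be indexed directly for deduping, or run through a hash such as the xp_md5 routine Jim links to if a shorter key is wanted.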
The client says that the industry uses a field where they take N characters from field A and M characters from field B and O characters from field C etc. I haven't seen any sign of such a field in the data that I am importing, but they keep saying we need such a thing. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 3:27 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Why not use Merge Replication to merge your data? For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. SQL Server creates a unique 16-byte id that you can use. In fact this is the field type used by the field when you include the table for replication. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this.
Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that RAID arrays can be built from any combination of drives. I had hoped to set up a bunch of RAID 1 disk pairs to hold the database but in the end had a heck of a time trying to get the RAID to function reliably. I was trying to set up a RAID 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, threw in another 250 g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g Dimms a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turned out to be bad. It turns out that the bad stick was one of two I opened to just test the ram and so wrongly came to the conclusion that the ram simply did not work with these motherboards. Luckily after getting one of the motherboards up I went back "one last time" to try and manually tweak the ram parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running. The two new motherboards and my original dev machine (which I had retired in favor of my Laptop).
I am so tired of Windows installs I could spit. Thank the big kahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single mdf file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second A64 machine, my old desktop, my new laptop and my wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. So much remains to be done, and none of it could proceed until I got all the data in which was a waaaay bigger task than I anticipated. I have ordered the MS Action pack and somewhere down the road I hope to get a Windows 2003 server set up.
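The multiple-files arrangement John describes (and the question of where the indexes go) maps onto SQL Server filegroups; a sketch with hypothetical database, file, and path names:

```sql
-- Hypothetical layout: system catalog on the primary file, data spread
-- across four drives in one filegroup, indexes on a fifth drive.
CREATE DATABASE AddressDB
ON PRIMARY
    (NAME = AddrSys,   FILENAME = 'c:\sql\AddrSys.mdf'),
FILEGROUP DataFG
    (NAME = AddrData1, FILENAME = 'e:\sql\AddrData1.ndf'),
    (NAME = AddrData2, FILENAME = 'f:\sql\AddrData2.ndf'),
    (NAME = AddrData3, FILENAME = 'g:\sql\AddrData3.ndf'),
    (NAME = AddrData4, FILENAME = 'h:\sql\AddrData4.ndf'),
FILEGROUP IndexFG
    (NAME = AddrIdx,   FILENAME = 'i:\sql\AddrIdx.ndf')
LOG ON
    (NAME = AddrLog,   FILENAME = 'd:\sql\AddrLog.ldf')
-- Telling SQL where to put an index is then just an ON clause:
-- CREATE INDEX IX_State ON tblAddresses (State) ON IndexFG
```

Only the first file carries the .mdf extension; secondary data files are conventionally .ndf. SQL Server fills the files in a filegroup proportionally as it allocates new extents, which spreads the I/O across the drives.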
I have heard rumors that it can run 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks that have been giving me suggestions and reading materials. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Sun Sep 5 10:20:01 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 5 Sep 2004 08:20:01 -0700 Subject: [dba-SQLServer] Fwd: Move database files from disk to disk In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Tom Cooper Date: Sun, 5 Sep 2004 10:01:49 -0400 Subject: RE: Move database files from disk to disk To: SQL Server 2k List KB article 224071 http://support.microsoft.com/default.aspx?scid=kb;EN-US;224071
explains how to move any database, both user and system dbs. Tom > -----Original Message----- > From: bounce-sql2k-6393174 at ls.sswug.org [mailto:bounce-sql2k- > 6393174 at ls.sswug.org] On Behalf Of Francisco Tapia > Sent: Sunday, September 05, 2004 12:22 AM > To: SQL Server 2k List > Subject: Move database files from disk to disk > > I don't see where else to do that unless you do it upfront in the > installation, I'm going to CC: another really good list on this and > find out if anyone has any ideas (i'll reply to you if anything comes > back. > > > ---------- Forwarded message ---------- > From: John W. Colby > Date: Sat, 04 Sep 2004 16:06:27 -0400 > Subject: RE: [dba-SQLServer] Move database files from disk to disk > To: dba-sqlserver at databaseadvisors.com > > No, I mean the entire shootin match. I "set up" sql server and failed to > specify where I wanted the data files. I need to move everything, the > system database etc. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Saturday, September 04, 2004 3:54 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Move database files from disk to disk > > just modify your paths accordingly > > EXEC sp_attach_db @dbname = N'MyDB', > @filename1 = N'd:\SqlServer\data\MyDB.mdf', > @filename2 = N'f:\FastLogDisk\MyDB_log.ldf' > > On Sat, 04 Sep 2004 14:02:26 -0400, John W. Colby > wrote: > > I need to move a database from one disk to another, and can't remember > > how. I have disconnected my main (user) database and now need to move > > all the system stuff. > > ------------- See http://www.sswug.org/archives for list archives. mailto:listadmin at sswug.org with list issues. -- -Francisco From jwcolby at colbyconsulting.com Sun Sep 5 10:23:20 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Sun, 05 Sep 2004 11:23:20 -0400 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: <001d01c4935a$d35577b0$80b3fea9@ColbyM6805> Message-ID: <001e01c4935c$498c6af0$80b3fea9@ColbyM6805> Is it my imagination or is SQL Server just not particularly friendly? Yea, I know, it is a BIG database and you are supposed to know what you are doing but still... For example, I need to truncate my log files. I am doing this BCP and three of the 4 BCPs I had running last night failed due to the log file running out of room. SIGH! Now I can find a help page on Truncating the log file, but nowhere does it actually say "do this". It seems that this would be something that needs to be done manually once in a while if for no other reason than (like this case) to get back in operation quickly. So you would THINK there would be a "truncate log file" menu item somewhere. Ah yes, research is sooooo good for the soul. I have a TON of real (paying) work waiting to be done and I'm trying to figure out how to do a simple thing like truncate the log file! John W. Colby www.ColbyConsulting.com From fhtapia at gmail.com Sun Sep 5 10:44:16 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 5 Sep 2004 08:44:16 -0700 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: <001d01c4935a$d35577b0$80b3fea9@ColbyM6805> References: <001d01c4935a$d35577b0$80b3fea9@ColbyM6805> Message-ID: Optimizing is one of the things I love most in Sql Server. :), You don't HAVE to index a lot of the fields to get the COUNT to move faster. If you open up your Query Analyzer and run a simple select such as SELECT COUNT(*) FROM tblMyTable, run it and then go back into the QUERY Menu and select the Tuning Wizard, it will offer some Indexing suggestions which you can accept and apply. Next... your T and F fields can be modified to appropriate data such as a BIT where the data is either 1 or null, or tinyint where you can get them to be 1 or 0.
You'd append a column like so: ALTER TABLE tblAuthors ADD columnTEST BIT After you build your test sprocs, you can re-run the Index Tuning Wizard along w/ the execution plan to find out what other fields should also be indexed or converted from nvarchar to bit/tinyint On Sun, 05 Sep 2004 11:12:49 -0400, John W. Colby wrote: > OK it's time to talk about how to speed things up. I now have a SQL Server > machine with (4) 250g SATA data drives each containing one file from the > database. The four files together form a single database with a single > table which as of this morning contains 27 million records. Doing a count > (*) from another machine took 18 minutes to count 24 million records. > > What can I do to speed up this count function? What can I do in general to > speed up accessing the database? I am going to need to do cross tab type > queries across all 65 million records to see how many people do or use X > thing. I will also need to pull specific fields from all 65 million records > WHERE some field .... > > There are 600 fields (so far) and I just can't see indexing all 600 fields > although indexes on select fields will be a necessity. Table scans on > Boolean values is going to take FOREVER. The "Boolean" fields are currently > nvar(50) fields holding a Y or N. Will it help to go through the database > changing these fields to a different data type? > > Please be as specific (including instructions on how if at all possible) as > you can since a general "do Y" will take me HOURS of research. I LOVE > research, research is good for the soul, but I only have so many hours in > the day and I'm trying to learn a lot in a short time. > -- -Francisco From fhtapia at gmail.com Sun Sep 5 10:56:41 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 5 Sep 2004 08:56:41 -0700 Subject: [dba-SQLServer] Is it my imagination?
In-Reply-To: <001e01c4935c$498c6af0$80b3fea9@ColbyM6805> References: <001d01c4935a$d35577b0$80b3fea9@ColbyM6805> <001e01c4935c$498c6af0$80b3fea9@ColbyM6805> Message-ID: I know your rant on this one John, but the truth is that Truncating a log is something that must be done w/ care depending on how you want it to be done, the point to a Transaction log file is to keep an additional redundancy to the recently added/modified data. So you don't want to just blindly truncate logs. Additionally, I think the mentality is different when dealing w/ Sql Server than it is w/ something like Access.... Now it's not impossible for them to give you a gui tool to do this: (where DBName is your database name) BACKUP LOG DBName WITH TRUNCATE_ONLY DBCC SHRINKFILE (DBName_Log, 10) the 10 is the size, in MB, that you want the newly truncated log file to be. If you simply just backup the log you will also "Clear checkpoints" and fully commit data from the log to the database. But running a statement such as the above w/ TRUNCATE_ONLY could be dangerous in a production environment because you "could" potentially lose data if some user were trying to append a series of large records at that point in time. I suspect that is why they don't just give you a GUI button for it. Because you know people, the moment you make something easier, they take that route instead of doing things the right way... On Sun, 05 Sep 2004 11:23:20 -0400, John W. Colby wrote: > Is it my imagination or is SQL Server just not particularly friendly? Yea, > I know, it is a BIG database and you are supposed to know what you are doing > but still... > > For example, I need to truncate my log files. I am doing this BCP and three > of the 4 BCPs I had running last night failed due to the log file running > our of room. SIGH! > > Now I can find a help page on Truncating the log file, but nowhere does it > actually say "do this".
It seems that this would be something that needs to > be done manually once in awhile if for no other reason than (like this case) > to get back in operation quickly. So you would THINK there would be a > "truncate log file" menu item somewhere. > > Ah yes, research is sooooo good for the soul. I have a TON of real (paying) > work waiting to be done and I'm trying to figure out how to do a simple > thing like truncate the log file! > -- -Francisco From ebarro at afsweb.com Sun Sep 5 11:00:17 2004 From: ebarro at afsweb.com (Eric Barro) Date: Sun, 5 Sep 2004 09:00:17 -0700 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: <001e01c4935c$498c6af0$80b3fea9@ColbyM6805> Message-ID: John, You're not the only one who thinks that SQL server is not particularly friendly. Perhaps there's a reason behind that design... Anyway you've come across something that you'd think that SQL server should handle gracefully -- truncating log files. Here's how I dealt with a similar issue... Overview: You will need to restrict the growth of the transaction log file and then set up a backup schedule for the log file followed by a shrink log file operation. 1. Right click on the database and then go into Properties. 2. Click the Transaction Log tab 3. Make sure to restrict file growth to a manageable size where your drive won't run out of disk space 4. Click Options tab 5. Check the Auto shrink checkbox. This will allow SQL server to automatically shrink your database and log files to the minimum required. 6. Click OK and apply all those changes. 7. Go into the Management section of EM. 8. Select Backup 9. Create a backup set for your log files. Select the db, check Transaction Log radio button. Select destination and check the schedule box and specify when you want it to run. Depending on the amount of activity and the maximum amount you specified for the log file to grow you will need to specify a scheduled backup and shrink operation that will suit your needs.
On our production environment I had it set to 1 hr intervals. 10. Set up a scheduled shrink log file operation. I don't have the exact commands with me right now and I can't access the server from outside so I don't have the exact steps for you on this one. A little research might be good for the soul then... LOL! Hope this helps... --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 8:23 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Is it my imagination? Is it my imagination or is SQL Server just not particularly friendly? Yea, I know, it is a BIG database and you are supposed to know what you are doing but still... For example, I need to truncate my log files. I am doing this BCP and three of the 4 BCPs I had running last night failed due to the log file running our of room. SIGH! Now I can find a help page on Truncating the log file, but nowhere does it actually say "do this". It seems that this would be something that needs to be done manually once in awhile if for no other reason than (like this case) to get back in operation quickly. So you would THINK there would be a "truncate log file" menu item somewhere. Ah yes, research is sooooo good for the soul. I have a TON of real (paying) work waiting to be done and I'm trying to figure out how to do a simple thing like truncate the log file! John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From ebarro at afsweb.com Sun Sep 5 11:04:27 2004 From: ebarro at afsweb.com (Eric Barro) Date: Sun, 5 Sep 2004 09:04:27 -0700 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: Message-ID: John, In terms of the hardware side you will want read/writes to the log file and read/writes to the database to go to separate hardware controllers and disk sub-systems. SQL server keeps track of every transaction in the transaction log file so it makes sense to have that write to its own controller/drive sub-system. A second processor and lots of RAM (max out your RAM if you can) will help a lot. But of course we all know that a poorly designed db/application can bring any high end system to its knees. I've seen it happen too many times... --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Francisco Tapia Sent: Sunday, September 05, 2004 8:44 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Optimizing nVLDB databases Optimizing is one of the things I love most in Sql Server. :), You don't HAVE to index a lot of the fields to get the COUNT to move faster. If you open up your Query Analyzer and run a simple select such as SELECT COUNT(*) FROM tblMyTable, run it and then go back into the QUERY Menu and select Tunning wizard, it will offer some Indexing suggestions where you can accpet and apply them. Next... your T and F fields can be modified to appropriate data such as a BIT where the data is either 1 or null, or tinyint where you can get them to be 1 or 0. 
You'd append a column like So: ALTER TABLE tblAuthors ADD COLUMN columnTEST BIT After you build your test sprocs, you can re run the index tunning wizard along w/ the execution plan to find out what other fields should also be indexed or converted from nvarchar to bit/tinyint On Sun, 05 Sep 2004 11:12:49 -0400, John W. Colby wrote: > OK it's time to talk about how to speed things up. I now have a SQL Server > machine with (4) 250g SATA data drives each containing one file from the > database. The four files together form a single database with a single > table which as of this morning contains 27 million records. Doing a count > (*) from another machine took 18 minutes to count 24 million records. > > What can I do to speed up this count function? What can I do in general to > speed up accessing the database? I am going to need to do cross tab type > queries across all 65 million records to see how many people do or use X > thing. I will also need to pull specific fields from all 65 million records > WHERE some field .... > > There are 6000 fields (so far) and I just can't see indexing all 600 fields > although indexes on select fields will be a necessity. Table scans on > Boolean values is going to take FOREVER. The "Boolean" fields are currently > nvar(50) fields holding a Y or N. Will it help to go through the database > changing these fields to a different data type? > > Please be as specific (including instructions on how if at all possible) as > you can since a general "do Y" will take me HOURS of research. I LOVE > research, research is good for the soul, but I only have so many hours in > the day and I'm trying to learn a lot in a short time. 
> -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Sun Sep 5 11:13:26 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 5 Sep 2004 09:13:26 -0700 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: References: <001e01c4935c$498c6af0$80b3fea9@ColbyM6805> Message-ID: I used to do that before, and is a good plan to keep, I currently use the following in a production db, back up the transaction logs when they reach 60% of their total size. here are the steps to do that: IN EM: Under the Management folder and under the SQL Server Agent icon click on Alerts and create a new Alert. Give your alert a meaningful name In the General tab: Type choose: Sql Server Performance condition alert (enabled) Object: SqlServer:Databases Counter: Percent Log Used Instance: MyDb Alert If Counter: rises above Value: 60 In the Response Tab (Check Execute Job) and create a job (the three ... dots) your job should have the following TSQL job for backup: BACKUP LOG [MyDB] TO [LogBackupDeviceName] WITH INIT On Sun, 5 Sep 2004 09:00:17 -0700, Eric Barro wrote: > John, > > You're not the only one who thinks that SQL server is not particularly friendly. Perhaps there's a reason behind that design... > > Anyway you've come across something that you'd think that SQL server should handle gracefully -- truncating log files. > > Here's how I dealt with a similar issue... > > Overview: You will need to restrict the growth of the transaction log file and then set up a backup schedule for the log file followed by a shrink log file operation. > > 1. Right click on the database and then go into Properties. > 2. Click the Transaction Log tab > 3. Make sure to restrict file growth to a manageable size where your drive won't run out disk space > 4. Click Options tab > 5. 
Check the Auto shrink checkbox. This will allow SQL server to automatically shrink your database and log files to the minumum required. > 6. Click OK and apply all those changes. > 7. Go into the Management section of EM. > 8. Select Backup > 9. Create a backup set for your log files. Select the db, check Transaction Log radio button. Select destination and check the schedule box and specify when you want it to run. Depending on the amount of activity and the maximum amount you specified for the log file to grow you will need to specify a scheduled backup and shrink operation that will suit your needs. On our production environment I had it set to 1 hr intervals. > 10. Set up a scheduled shrink log file operation. I don't have the exact commands with me right now and I can't access the server from outside so I don't have the exact steps for you on this one. A little research might be good for the soul then... LOL! > > Hope this helps... > > --- > Eric Barro > Senior Systems Analyst > Advanced Field Services > (208) 772-7060 > http://www.afsweb.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. > Colby > Sent: Sunday, September 05, 2004 8:23 AM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] Is it my imagination? > > Is it my imagination or is SQL Server just not particularly friendly? Yea, > I know, it is a BIG database and you are supposed to know what you are doing > but still... > > For example, I need to truncate my log files. I am doing this BCP and three > of the 4 BCPs I had running last night failed due to the log file running > our of room. SIGH! > > Now I can find a help page on Truncating the log file, but nowhere does it > actually say "do this". It seems that this would be something that needs to > be done manually once in awhile if for no other reason than (like this case) > to get back in operation quickly. 
So you would THINK there would be a > "truncate log file" menu item somewhere. > > Ah yes, research is sooooo good for the soul. I have a TON of real (paying) > work waiting to be done and I'm trying to figure out how to do a simple > thing like truncate the log file! > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From jwcolby at colbyconsulting.com Sun Sep 5 11:20:26 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 12:20:26 -0400 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: Message-ID: <001f01c49364$439cf8f0$80b3fea9@ColbyM6805> Francisco, I appreciate all you assistance in this stuff. Your patience and knowledge are a godsend, and on SUNDAY at that!!! 8-) So how do I run this backup? Don't worry, I am heading out to buy a SQL Server 2K admin book which will hopefully assist me in all this stuff. In the meantime I need to get this log truncated so that I can start the imports back up before I go. >BACKUP LOG DBName WITH TRUNCATE ONLY Is this run in the SQL query window (as a query)? Do I need to somewhere tell SQL Server where the backup files are going to go? >DBCC SHRINKFILE (DBName_Log, 10) Do I need to do the shrink? I am going to continue the import which will grow the container again. Is the space reused if I don't shrink it or is it like Access where the file just adds NEW space and ignores old empty space. The help seems to indicate that the freed up empty space will be reused. While I have your attention... The following is what I see in the BOL for truncate. 
Truncate Method The Truncate method archive-marks transaction log records. Applies To: TransactionLog Object Syntax object.Truncate( ) Parts object Expression that evaluates to an object in the Applies To list How do I "call" this code? It appears that object is an object to be called from code. Can I do this from Access (since that is my comfort zone)? Do I have to reference this thing somehow? IOW What the h&^% is this and how do I use it? I am an accomplished programmer (or like to think so) but I haven't a clue how to get started with this stuff. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Sunday, September 05, 2004 11:57 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Is it my imagination? I know your rant on this one John, but the truth is that truncating a log is something that must be done w/ care depending on how you want it to be done; the point of a transaction log file is to keep an additional redundancy to the recently added/modified data. So you don't want to just blindly truncate logs. Additionally, I think the mentality is different when dealing w/ Sql Server than it is w/ something like Access.... Now it's not impossible for them to give you a gui tool to do this: (where DBName is your database name) BACKUP LOG DBName WITH TRUNCATE ONLY DBCC SHRINKFILE (DBName_Log, 10) the 10 is the size (in MB) that you want your newly truncated log file shrunk to. If you simply just backup the log you will also "clear checkpoints" and fully commit data from the log to the database. But running a command such as the above w/ TRUNCATE ONLY could be dangerous in a production environment because you "could" potentially lose data if some user were trying to append a series of large records at that point in time. I suspect that is why they don't just give you a GUI button for it.
Because you know people, the moment you make something easier, they take that route instead of doing things the right way... On Sun, 05 Sep 2004 11:23:20 -0400, John W. Colby wrote: > Is it my imagination or is SQL Server just not particularly friendly? > Yea, I know, it is a BIG database and you are supposed to know what > you are doing but still... > > For example, I need to truncate my log files. I am doing this BCP and > three of the 4 BCPs I had running last night failed due to the log > file running our of room. SIGH! > > Now I can find a help page on Truncating the log file, but nowhere > does it actually say "do this". It seems that this would be something > that needs to be done manually once in awhile if for no other reason > than (like this case) to get back in operation quickly. So you would > THINK there would be a "truncate log file" menu item somewhere. > > Ah yes, research is sooooo good for the soul. I have a TON of real > (paying) work waiting to be done and I'm trying to figure out how to > do a simple thing like truncate the log file! > -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Sun Sep 5 11:48:40 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 5 Sep 2004 09:48:40 -0700 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: <001f01c49364$439cf8f0$80b3fea9@ColbyM6805> References: <001f01c49364$439cf8f0$80b3fea9@ColbyM6805> Message-ID: See Inline: On Sun, 05 Sep 2004 12:20:26 -0400, John W. Colby wrote: > Francisco, > > I appreciate all you assistance in this stuff. Your patience and knowledge > are a godsend, and on SUNDAY at that!!! 
8-) Thanks, and to think I'm just avoiding yardwork by staying in a little later (my excuse is that I don't want to wake the neighbors, but now it's 9:30 and i'm gonna have to soon break out w/ my yard boots :(. In your program group for Sql Server there is an icon labeled Query Analyzer. It is a very, very robust tool. I like the Sql 2000 version, which also came out w/ color coding for all the TSQL code. I hear rumors that the yukon one will also feature intellisense, but I hope they are not just rumors :D. > So how do I run this backup? Don't worry, I am heading out to buy a SQL > Server 2K admin book which will hopefully assist me in all this stuff. In > the meantime I need to get this log truncated so that I can start the > imports back up before I go. > > >BACKUP LOG DBName WITH TRUNCATE ONLY > > Is this run in the SQL query window (as a query)? Do I need to somewhere > tell SQL Server where the backup files are going to go? > When you open up Query Analyzer you can hit the F8 key if you don't already get the object browser popping up in the left side. If you are NOT trying to keep your log file shrunk, but just want to keep it from growing anymore, or for that matter would like to maintain it at a smaller size, then I suggest you go via "EM" (Enterprise Manager): right-click the database properties, click on the transaction log tab, and restrict the growth of your transaction log; then you'd want to set up an alert that would auto-backup/truncate the log to prevent more growth. I can repost the instructions on this again if you want to do this.
Since you're going to keep re-using the space and feel comfortable w/ a transaction log that big, you can leave it as is, or convert the size to something more manageable. I can re-post that sample on how to set up an Alert in Sql Server to auto-backup your transaction log for this type of task. Also, if you set up a typical BACKUP process for your database, then the transaction log and all its checkpoints are automatically cleared. The fact that it has not been happening on its own right now suggests that you have not gotten to the point where you've backed the database up. > While I have your attention... > > The following is what I see in the BOL for truncate. > > Truncate Method > The Truncate method archive-marks transaction log records. > > Applies ToTransactionLog Object > > Syntax > object.Truncate( ) > > Parts > object > > Expression that evaluates to an object in the Applies To list > > How do I "call" this code. It appears that object is an object to be called > from code. Can I do this from Access (since that is my comfort zone)? Do I > have to reference this thing somehow. IOW What the h&^% is this and how do > I use it? I am an accomplished programmer (or like to think so) but I > haven't a clue how to get started with this stuff. > It's what's known as SQL-DMO code. You add the sql-dmo reference to your Access Project (MDB or ADP) and proceed to use it in code as such. If you have not downloaded the Books OnLine SP3, you ought to; there are some revised samples and such that will help you tremendously... also a TIP, yes a TIP since you stuck w/ the thread till the bottom of the email ;-) When using QA (Query Analyzer), if you require help on a keyword, simply highlight or put your cursor over the keyword and hit CTRL+F1 and BOL (Books online) will automatically open up w/ a pre-search to that keyword. -- -Francisco From jwcolby at colbyconsulting.com Sun Sep 5 12:23:24 2004 From: jwcolby at colbyconsulting.com (John W.
Colby) Date: Sun, 05 Sep 2004 13:23:24 -0400 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: Message-ID: <002301c4936d$0fa75780$80b3fea9@ColbyM6805> >also a TIP, yes a TIP since you stuck w/ the thread till the bottom of the email ;-) When using QA (Query Analyzer) and require help on a keyword simply highlight or put your cursor over the keyword and hit CTRL+F1 and BOL (Books online) will automatically open up w/ a pre-search to that keyword. COOL. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Sunday, September 05, 2004 12:49 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Is it my imagination? See Inline: On Sun, 05 Sep 2004 12:20:26 -0400, John W. Colby wrote: > Francisco, > > I appreciate all you assistance in this stuff. Your patience and > knowledge are a godsend, and on SUNDAY at that!!! 8-) thanks, and to think I'm just avoiding yardwork, by staying in a little later (my excuse is that I don't want to wake the neighbors, but now it's 930 and i'm gonna have to soon break out w/ my yard boots :(. in your program group for Sql Server there is an Icon labeld Query Analyzer. It is a very very robust tool. I like the Sql 2000 version wich also came out w/ color coding for all the TSQL code, I hear rumors that the yukon one will also feature intellisense, but I hope they are not just rumors :D. > So how do I run this backup? Don't worry, I am heading out to buy a > SQL Server 2K admin book which will hopefully assist me in all this > stuff. In the meantime I need to get this log truncated so that I can > start the imports back up before I go. > > >BACKUP LOG DBName WITH TRUNCATE ONLY > > Is this run in the SQL query window (as a query)? Do I need to > somewhere tell SQL Server where the backup files are going to go? 
> When you open up Query Analyzer you can hit the F8 key if you don't already get the object browser poping up in the Left side. If you are NOT trying to keep your log file shrunk, but just want to keep it from growing anymore, or for that matter like to maintain it at a smaller size. Then I suggest you go via "EM" (Enterprise Manager) right click the database properties and click on the transaction log tab and restrict the growth of your transaction log, then you'd want to set up an alert that would auto-backup/truncate the log to prevent more growth. I can repost the instructions on this again if you want to do this > >DBCC SHRINKFILE (DBName_Log, 10) > > Do I need to do the shrink? I am going to continue the import which > will grow the container again. Is the space reused if I don't shrink > it or is it like Access where the file just adds NEW space and ignores > old empty space. The help seems to indicate that the freed up empty > space will be reused. Since you're going to keep re-using the space and feel comfortable w/ a transaction log that big you can leave it as is, or convert the size to something more managable. I can re-post that sample on how to setup an Alert in Sql Server to auto-backup your transaction log for this type of task. also, If you set up a typical BACKUP process for your database, then the Transaction log and all it's checkpoints are automatically cleared. The fact that it has not been happening on it's own right now suggests that you have not gotten to the point where you've backed the database up. > While I have your attention... > > The following is what I see in the BOL for truncate. > > Truncate Method > The Truncate method archive-marks transaction log records. > > Applies ToTransactionLog Object > > Syntax > object.Truncate( ) > > Parts > object > > Expression that evaluates to an object in the Applies To list > > How do I "call" this code. It appears that object is an object to be > called from code. 
Can I do this from Access (since that is my comfort > zone)? Do I have to reference this thing somehow. IOW What the h&^% > is this and how do I use it? I am an accomplished programmer (or like > to think so) but I haven't a clue how to get started with this stuff. > It's what's known as SQL-DMO code. You add the sql-dmo refrence to your Access Project (MDB or ADP) and proceed to use it in code as such. If you have not downloaded the Books OnLine SP3, you ought to there is some revised samples and such that will help you tremendously... also a TIP, yes a TIP since you stuck w/ the thread till the bottom of the email ;-) When using QA (Query Analyzer) and require help on a keyword simply highlight or put your cursor over the keyword and hit CTRL+F1 and BOL (Books online) will automatically open up w/ a pre-search to that keyword. -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From ebarro at afsweb.com Sun Sep 5 13:55:21 2004 From: ebarro at afsweb.com (Eric Barro) Date: Sun, 5 Sep 2004 11:55:21 -0700 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: <001f01c49364$439cf8f0$80b3fea9@ColbyM6805> Message-ID: >Francisco, >I appreciate all you assistance in this stuff. Your patience and knowledge are a godsend, and on SUNDAY at that!!! 8-) Hey...how about me? LOL! --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:20 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Is it my imagination? Francisco, I appreciate all you assistance in this stuff. Your patience and knowledge are a godsend, and on SUNDAY at that!!! 
8-) From martyconnelly at shaw.ca Sun Sep 5 15:42:31 2004 From: martyconnelly at shaw.ca (MartyConnelly) Date: Sun, 05 Sep 2004 13:42:31 -0700 Subject: [dba-SQLServer] Is it my imagination? References: <001f01c49364$439cf8f0$80b3fea9@ColbyM6805> Message-ID: <413B7A37.8050803@shaw.ca> Here is a good SQL FAQ. Needs registration and you may have to remove quotes surrounding some of the lowerlevel url's in your IE address bar. http://www.sqlservercentral.com/faq/listfaq.asp?categoryid=2 John W. Colby wrote: >Francisco, > >I appreciate all you assistance in this stuff. Your patience and knowledge >are a godsend, and on SUNDAY at that!!! 8-) > >So how do I run this backup? Don't worry, I am heading out to buy a SQL >Server 2K admin book which will hopefully assist me in all this stuff. In >the meantime I need to get this log truncated so that I can start the >imports back up before I go. > > > >>BACKUP LOG DBName WITH TRUNCATE ONLY >> >> > >Is this run in the SQL query window (as a query)? Do I need to somewhere >tell SQL Server where the backup files are going to go? > > > >>DBCC SHRINKFILE (DBName_Log, 10) >> >> > >Do I need to do the shrink? I am going to continue the import which will >grow the container again. Is the space reused if I don't shrink it or is it >like Access where the file just adds NEW space and ignores old empty space. >The help seems to indicate that the freed up empty space will be reused. > >While I have your attention... > >The following is what I see in the BOL for truncate. > > >Truncate Method >The Truncate method archive-marks transaction log records. > >Applies ToTransactionLog Object > >Syntax >object.Truncate( ) > >Parts >object > >Expression that evaluates to an object in the Applies To list > >How do I "call" this code. It appears that object is an object to be called >from code. Can I do this from Access (since that is my comfort zone)? Do I >have to reference this thing somehow. IOW What the h&^% is this and how do >I use it? 
I am an accomplished programmer (or like to think so) but I >haven't a clue how to get started with this stuff. > >John W. Colby >www.ColbyConsulting.com > >-----Original Message----- >From: dba-sqlserver-bounces at databaseadvisors.com >[mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco >Tapia >Sent: Sunday, September 05, 2004 11:57 AM >To: dba-sqlserver at databaseadvisors.com >Subject: Re: [dba-SQLServer] Is it my imagination? > > >I know your rant on this one John, but the truth is that Truncating a log is >something that must be done w/ care depending on how you want it to be done, >the point to a Transaction log file is to keep an additional redundancy to >the recently added/modified data. So you don't want to just blindingly >truncating logs. Additionally, I think the mentality is diffrent when >dealing w/ Sql Server than it is w/ something like Access.... Now it's not >impossible for them to give you a gui tool to do this: (where DBName is your >database name) > >BACKUP LOG DBName WITH TRUNCATE ONLY >DBCC SHRINKFILE (DBName_Log, 10) > >the 10 is the size you want your newly truncated log to be the size of. If >you simply just backup the log you will also "Clear checkpoints" and fully >commit data from the log to the database. But running a point such as the >above w/ TRUNCATE ONLY could be dangerous in a production environment >because you "could" potentially loose data if some user were trying to >append a series of large records at that point in time. I suspect that is >why they don't just give you a GUI button for it. Because you know people, >the moment you make something easier, they take that route instead of doing >things the right way... > > >On Sun, 05 Sep 2004 11:23:20 -0400, John W. Colby > wrote: > > >>Is it my imagination or is SQL Server just not particularly friendly? >>Yea, I know, it is a BIG database and you are supposed to know what >>you are doing but still... 
>> >> >> >> > > For example, I need to truncate my log files. I am doing this BCP and > > >>three of the 4 BCPs I had running last night failed due to the log >>file running our of room. SIGH! >> >>Now I can find a help page on Truncating the log file, but nowhere >>does it actually say "do this". It seems that this would be something >>that needs to be done manually once in awhile if for no other reason >>than (like this case) to get back in operation quickly. So you would >>THINK there would be a "truncate log file" menu item somewhere. >> >>Ah yes, research is sooooo good for the soul. I have a TON of real >>(paying) work waiting to be done and I'm trying to figure out how to >>do a simple thing like truncate the log file! >> >> >> > > > > > -- Marty Connelly Victoria, B.C. Canada From jwcolby at colbyconsulting.com Sun Sep 5 15:57:02 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 16:57:02 -0400 Subject: [dba-SQLServer] Is it my imagination? In-Reply-To: Message-ID: <000b01c4938a$e7ce1870$80b3fea9@ColbyM6805> Eric, No slight intended, and yes, you are here on Sunday as well! In fact thanks to a whole ton of people who have been responding to my rather vague pleas for help. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 2:55 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Is it my imagination? >Francisco, >I appreciate all you assistance in this stuff. Your patience and >knowledge are a godsend, and on SUNDAY at that!!! 8-) Hey...how about me? LOL! --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. 
Colby Sent: Sunday, September 05, 2004 9:20 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Is it my imagination? Francisco, I appreciate all you assistance in this stuff. Your patience and knowledge are a godsend, and on SUNDAY at that!!! 8-) _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Sun Sep 5 17:25:49 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 08:25:49 +1000 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: <001d01c4935a$d35577b0$80b3fea9@ColbyM6805> References: Message-ID: <413C1F0D.1375.96296BE@lexacorp.com.pg> On 5 Sep 2004 at 11:12, John W. Colby wrote: > OK it's time to talk about how to speed things up. I now have a SQL Server > machine with (4) 250g SATA data drives each containing one file from the > database. The four files together form a single database with a single > table which as of this morning contains 27 million records. Doing a count > (*) from another machine took 18 minutes to count 24 million records. > > What can I do to speed up this count function? Repeat from my posting in this list pm Sat 21 Aug 04. There is another way to determine the total row count in a table. You can use the sysindexes system table for this purpose. There is ROWS column in the sysindexes table. This column contains the total row count for each table in your database. 
So, you can use the following select statement instead of the above one: SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2 -- Stuart From stuart at lexacorp.com.pg Sun Sep 5 17:47:29 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 08:47:29 +1000 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, In-Reply-To: <001c01c49356$ef382080$80b3fea9@ColbyM6805> References: Message-ID: <413C2421.4009.9766EE2@lexacorp.com.pg> On 5 Sep 2004 at 10:44, John W. Colby wrote: > MATCH FIELD! That's the name they called it. What I haven't discovered is > whether the field is actually required or whether a unique index on all > these fields prevents dupes and I'm done. It seems like an extra step to > pull those X characters out, append them all together, then drop them in a > new field. They insist that it is needed but they don't understand > databases. I understand databases but I don't know their business. > > I suspect that this new "match field" is then indexed to prevent dupes. Is > it used for anything else? Is it a standard definition (how many characters > from which fields)? > I doubt that there is a standard. "Match field" seems to just be another name for "natural key" (at least a quick scan of some Google results on this suggests so). If you create a field which contains a hash derived from x characters from a number of fields (where x varies according to the field, but is less than the maximum record length) and index that, it will make the index file smaller than indexing the fields directly and should be faster than a multi-field index. "Match Field v Natural Key" looks to me like one of those arguments where the best solution depends on the type of data in use and the specifics of the db engine being used. -- Stuart From jwcolby at colbyconsulting.com Sun Sep 5 18:05:59 2004 From: jwcolby at colbyconsulting.com (John W.
Colby) Date: Sun, 05 Sep 2004 19:05:59 -0400 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: <413C1F0D.1375.96296BE@lexacorp.com.pg> Message-ID: <000001c4939c$eba61d50$80b3fea9@ColbyM6805> That is a time saver! Thanks, John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Sunday, September 05, 2004 6:26 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Optimizing nVLDB databases On 5 Sep 2004 at 11:12, John W. Colby wrote: > OK it's time to talk about how to speed things up. I now have a SQL > Server machine with (4) 250g SATA data drives each containing one file > from the database. The four files together form a single database > with a single table which as of this morning contains 27 million > records. Doing a count > (*) from another machine took 18 minutes to count 24 million records. > > What can I do to speed up this count function? Repeat from my posting in this list pm Sat 21 Aug 04. There is another way to determine the total row count in a table. You can use the sysindexes system table for this purpose. There is ROWS column in the sysindexes table. This column contains the total row count for each table in your database. So, you can use the following select statement instead of above one: SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2 -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 5 18:10:43 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Sun, 05 Sep 2004 19:10:43 -0400 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: <413C1F0D.1375.96296BE@lexacorp.com.pg> Message-ID: <000101c4939d$92991130$80b3fea9@ColbyM6805> >SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2 When I did this, it didn't give an error but it returned 0 rows. What is indid? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Sunday, September 05, 2004 6:26 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Optimizing nVLDB databases On 5 Sep 2004 at 11:12, John W. Colby wrote: > OK it's time to talk about how to speed things up. I now have a SQL > Server machine with (4) 250g SATA data drives each containing one file > from the database. The four files together form a single database > with a single table which as of this morning contains 27 million > records. Doing a count > (*) from another machine took 18 minutes to count 24 million records. > > What can I do to speed up this count function? Repeat from my posting in this list pm Sat 21 Aug 04. There is another way to determine the total row count in a table. You can use the sysindexes system table for this purpose. There is ROWS column in the sysindexes table. This column contains the total row count for each table in your database. So, you can use the following select statement instead of above one: SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2 -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 5 18:18:31 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Sun, 05 Sep 2004 19:18:31 -0400 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, In-Reply-To: <413C2421.4009.9766EE2@lexacorp.com.pg> Message-ID: <000201c4939e$abe03550$80b3fea9@ColbyM6805> OK, I can understand that logic. The time up front to build the index is outweighed in the time it takes to find an item based on the match key. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Sunday, September 05, 2004 6:47 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question, On 5 Sep 2004 at 10:44, John W. Colby wrote: > MATCH FIELD! That's the name they called it. What I haven't > discovered is whether the field is actually required or whether a > unique index on all these fields prevents dupes and I'm done. It > seems like an extra step to pull those X characters out, append them > all together, then drop them in a new field. They insist that it is > needed but they don't understand databases. I understand databases > but I don't know their business. > > I suspect that this new "match field" is then indexed to prevent > dupes. Is it used for anything else? Is it a standard definition > (how many characters from which fields)? > I doubt that there is a standard. "Match field" seems to be to just be another name for "natural key" (At least a qick scan of the some googles on this suggest so) If you create a field which contains a hash derived from x characters from a number of fields (where x varies according to the field , but is less than the maximum record length) and index that, it will make the index file smaller than indexing the fields directly and should be faster than a multi field index. 
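A minimal T-SQL sketch of the hashed match-field idea described above, assuming an invented table tblPeople with LastName, FirstName, and Zip columns (CHECKSUM here stands in for whatever hash the list provider uses; CHECKSUM can collide, so candidate matches should be re-verified against the real columns):

```sql
-- Hypothetical names throughout.  The "match field" is an indexed
-- computed column hashing the leading characters of several fields;
-- indexing it is narrower than a wide multi-field index.
ALTER TABLE tblPeople ADD MatchHash AS
    CHECKSUM(LEFT(LastName, 6), LEFT(FirstName, 3), LEFT(Zip, 5))

CREATE INDEX IX_tblPeople_MatchHash ON tblPeople (MatchHash)

-- Rows sharing a hash are only duplicate candidates; confirm them
-- against the underlying columns before treating them as dupes.
SELECT MatchHash, COUNT(*) AS Cnt
FROM tblPeople
GROUP BY MatchHash
HAVING COUNT(*) > 1
```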
"Match Field v Natural Key" looks to me like one of those arguments where the best solution depends on the type of data in use and the specifics of the db engine being used. -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 5 19:37:26 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 20:37:26 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <000201c4939e$abe03550$80b3fea9@ColbyM6805> Message-ID: <000401c493a9$b2448e90$80b3fea9@ColbyM6805> I assume that the quotes are a valid strategy to allow commas to be embedded in the data, i.e. if the comma is inside a pair of quotes it is data, if it is outside, it is a field delimiter? Argh. Sigh. Beats head against sharp corner of filing cabinet. John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Sun Sep 5 19:34:33 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 20:34:33 -0400 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <000201c4939e$abe03550$80b3fea9@ColbyM6805> Message-ID: <000301c493a9$4b64cd70$80b3fea9@ColbyM6805> Back when I first started this data import I was using DTS to import all the text files. I subsequently switched to using bcp. It turns out that the data inside the raw text file has all fields in the format: "Some Data", "Some more data", "still some more data" Etc. Notice the quotes. It appears that dts ignored the quotes(?) and that bcp didn't. I now have 27 million records in my database with quotes around each data item. Argh! Can this be true? Is it a difference between DTS and BCP or is it something else entirely? Can BCP be told to ignore the quotes? Can SQL Server be told to take the quotes out of the already entered data? 
I spent ALL DAY yesterday feeding BCP queries into 4 machines feeding QUOTED DATA into my database. BIG SIGH! John W. Colby www.ColbyConsulting.com From stuart at lexacorp.com.pg Sun Sep 5 20:31:40 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 11:31:40 +1000 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <000401c493a9$b2448e90$80b3fea9@ColbyM6805> References: <000201c4939e$abe03550$80b3fea9@ColbyM6805> Message-ID: <413C4A9C.6956.A0CAF2D@lexacorp.com.pg> On 5 Sep 2004 at 20:37, John W. Colby wrote: > I assume that the quotes are a valid strategy to allow commas to be embedded > in the data, i.e. if the comma is inside a pair of quotes it is data, if it > is outside, it is a field delimiter? > > Argh. Sigh. Beats head against sharp corner of filing cabinet. > Yep, it's a very common convention to quote all strings in CSV files. You always need to check your data format and define the appropriate import method before working with any text import method. Even the Access Import text Wizard has a selection box for "Text Qualifier" which offers the choice of double, single quotes or "~none~" -- Stuart From jwcolby at colbyconsulting.com Sun Sep 5 20:41:24 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 21:41:24 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <000401c493a9$b2448e90$80b3fea9@ColbyM6805> Message-ID: <000601c493b2$a1c376e0$80b3fea9@ColbyM6805> BTW my BCP query looks like: BULK INSERT conduit..conduit FROM 'z:\Conduit001.txt' WITH ( DATAFILETYPE = 'char', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. 
Colby Sent: Sunday, September 05, 2004 8:37 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Quotes in data part II I assume that the quotes are a valid strategy to allow commas to be embedded in the data, i.e. if the comma is inside a pair of quotes it is data, if it is outside, it is a field delimiter? Argh. Sigh. Beats head against sharp corner of filing cabinet. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Sun Sep 5 20:43:00 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 11:43:00 +1000 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <000301c493a9$4b64cd70$80b3fea9@ColbyM6805> References: <000201c4939e$abe03550$80b3fea9@ColbyM6805> Message-ID: <413C4D44.10836.A170FEA@lexacorp.com.pg> On 5 Sep 2004 at 20:34, John W. Colby wrote: > > Argh! Can this be true? Yes :-( > Can SQL Server be told to take the quotes out of the already entered > data? Yes, with a huge update query. For each field, you will need to update myField to: CASE LEFT(myField, 1) WHEN CHAR(34) THEN REPLACE(myField, CHAR(34), '') ELSE myField END To do so, I'd run a lot of updates against small subsets of the data (possibly based on the first character of lastname and firstname, stepping through from "A,A" to "Z,Z"). Otherwise you will end up with huge transaction logs. -- Stuart From jwcolby at colbyconsulting.com Sun Sep 5 20:40:57 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 05 Sep 2004 21:40:57 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <413C4A9C.6956.A0CAF2D@lexacorp.com.pg> Message-ID: <000501c493b2$9180b6d0$80b3fea9@ColbyM6805> >Yep, it's a very common convention to quote all strings in CSV files. 
You always need to check your data format and define the appropriate import method before working with any text import method. Well that's the thing, I know NOTHING about this stuff. The DTS handled it correctly. I didn't (still don't) see any parameters to BCP to tell it to get rid of the quotes. I thought BCP would work just like DTS. Not so apparently. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Sunday, September 05, 2004 9:32 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Quotes in data part II On 5 Sep 2004 at 20:37, John W. Colby wrote: > I assume that the quotes are a valid strategy to allow commas to be > embedded in the data, i.e. if the comma is inside a pair of quotes it > is data, if it is outside, it is a field delimiter? > > Argh. Sigh. Beats head against sharp corner of filing cabinet. > Yep, it's a very common convention to quote all strings in CSV files. You always need to check your data format and define the appropriate import method before working with any text import method. Even the Access Import text Wizard has a selection box for "Text Qualifier" which offers the choice of double, single quotes or "~none~" -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Sun Sep 5 20:49:17 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 11:49:17 +1000 Subject: [dba-SQLServer] Optimizing nVLDB databases In-Reply-To: <000101c4939d$92991130$80b3fea9@ColbyM6805> References: <413C1F0D.1375.96296BE@lexacorp.com.pg> Message-ID: <413C4EBD.2897.A1CD18C@lexacorp.com.pg> On 5 Sep 2004 at 19:10, John W. 
Colby wrote: > >SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < > 2 > > When I did this, it didn't give an error but it returned 0 rows. What is > indid? > Take a look at http://msdn.microsoft.com/library/default.asp?url=/library/en- us/tsqlref/ts_sys-i_76wj.asp (That should be all one line) -- Stuart From stuart at lexacorp.com.pg Sun Sep 5 21:37:38 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 12:37:38 +1000 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <000301c493a9$4b64cd70$80b3fea9@ColbyM6805> References: <000201c4939e$abe03550$80b3fea9@ColbyM6805> Message-ID: <413C5A12.13447.A4917D6@lexacorp.com.pg> On 5 Sep 2004 at 20:34, John W. Colby wrote: Can BCP be told to ignore the quotes? You need to edit the BCP format file to specify the delimiters for each field. For a four field import where the fields are 1 - Numeric 2 - Quoted string 3 - Quoted string 4 - Numeric (i.e. 111,"John","Colby",99) 1 SQLCHAR 0 0 ",\"" 1 f1 "" 2 SQLCHAR 0 0 "\",\"" 2 f2 "" 3 SQLCHAR 0 0 "\"," 3 f3 "" 4 SQLCHAR 0 0 "\"\n" 4 f4 "" -- Stuart From stuart at lexacorp.com.pg Sun Sep 5 21:51:16 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Mon, 06 Sep 2004 12:51:16 +1000 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <413C5A12.13447.A4917D6@lexacorp.com.pg> References: <000301c493a9$4b64cd70$80b3fea9@ColbyM6805> Message-ID: <413C5D44.11208.A559326@lexacorp.com.pg> On 6 Sep 2004 at 12:37, Stuart McLachlan wrote: > 4 SQLCHAR 0 0 "\"\n" 4 f4 "" Correction: just "\n" for the last field. -- Stuart From accessd at shaw.ca Sun Sep 5 23:00:46 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Sun, 05 Sep 2004 21:00:46 -0700 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <000601c493b2$a1c376e0$80b3fea9@ColbyM6805> Message-ID: Hi John: I have been following the data saga and have been quite enjoying it. Not in a sadistic way but in a sympathetic way. 
I am glad it is you and not me. Here are some observations and suggestions. 1. Why not import the data in chunks? Clean up each chunk by changing the 'Y' and 'N' fields to True and False (1,0). Then port the data to the final database. This could even be on another computer. Start partially breaking the data up along the lines described in that Warehousing article I posted a couple of days ago. This process could be accomplished by a SP. The database/tables holding the cleaned up data would be far smaller than the raw imported data. The conversion would take longer to get all the data imported but in actual fact the whole process would be shorter. (Note: if you are using a couple of computers connected across a LAN, a Switch is a better choice than a Hub.) 2. Do not even think of putting keys on the tables until all the data has been imported. Nothing consumes resources and slows performance more than having to manage keys when massive amounts of data are being imported. 3. Trying to do an exact match is almost impossible. There will be a number of matches but then there is the data where the person has changed their name, their name has been misspelled a number of times, they have moved residence, etc., etc. A lot of the matching can be done by the programs but it comes down to grunt work... like ten cloistered data-clerks each with a 5 year contract. These comments may have already been covered or may be totally redundant, but I hope there is something there that will help out. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 6:41 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in data part II BTW my BCP query looks like: BULK INSERT conduit..conduit FROM 'z:\Conduit001.txt' WITH ( DATAFILETYPE = 'char', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 8:37 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Quotes in data part II I assume that the quotes are a valid strategy to allow commas to be embedded in the data, i.e. if the comma is inside a pair of quotes it is data, if it is outside, it is a field delimiter? Argh. Sigh. Beats head against sharp corner of filing cabinet. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 5 23:57:43 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 00:57:43 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: Message-ID: <000701c493ce$0e9cf550$80b3fea9@ColbyM6805> Aww you're just sadistic at heart! ;-) In fact I have now had this thing in my lap for well over two weeks and still haven't managed to get it into a SQL Server db, though I FEEL it getting closer. The data is (IMHO) simply too BIG to do what you are talking about. We are talking about 600 fields here. The data is broken down into 22 files of 10 gb of raw data, each containing 3 million records of 600 fields. That is a LOT of data. Imagine that it takes "ONLY" 1/2 hour to clean up each field, we are now talking 300 hours. Would it really "only" take 1/2 hour to clean up one field in 22 files containing (a total of) 65 million records. Doubtful to say the least. 
If cleanup is going to occur, I think it will have to occur inside the database once it is imported. I can do the cleanup over months once I am up and running. As for the LAN thing, I currently have a 100 mbit router, with two very fast computers (3ghz a64 with 2g memory apiece), my laptop (A64 with 512mb ram) and 2 older desktop machines, 2.5g and 1.3g Athlons with 512mb ram. The new A64s have 1 gigabit NICs so I have ordered an 8 port gigabit switch. As much moving of the raw data as I am doing that can't hurt. I am waffling re whether a gigabit nic would help the other two desktops - it seems unlikely but maybe. In the end my wife's computer (the 1.3g Athlon) will go back to being her machine, I am only really using it for the import, and SHE has no need for gigabit LAN. I am using one of the new A64s as the SQL Server. I am using the other 4 machines to "throw data" at the server. It actually was working well, until I discovered the "quote" problem. Sigh. Each 10g file takes about 45 minutes to an hour to bulk insert into the server. With 4 machines hammering away I was actually making progress. Now I have to "start over" but once I do and get all the machines rolling again I should be in business shortly - assuming I don't find something else to get in the way. The biggest show stopper was simply getting the storage on line and figuring out that I could use multiple files for the db instead of one big file. In fact it appears that having multiple files is faster than a single file. I REALLY wanted to get my system set up with a Gig of storage, Raid 1 so that the reliability would be there from the start. It turns out that in my economic bracket it simply isn't possible to get that much storage, with hardware raid, in a single box. I don't have the bucks for SCSI and with a realistic max 250gb hard disk size, 1g (raid 1) is 8 drives. Plus the system disk, plus the cd rom. 
In fact it will be possible but only with PCI raid controllers, which in this case are slower because of having to deal with the PCI bus. My new motherboards can deal with 8 disks directly, 4 SATA and 4 IDE, directly off the Nvidia chip (doesn't go through the PCI bus) but that leaves the system disk (disks if you raid that too) and the CD. Something's gotta give! Since I MUST show results I backed off, set up 1G (4 SATA Drives) NON Raid plus the system disk (non raid), CD and another 250g IDE drive just for misc. stuff (raw data anyone?). That's working fine but somewhere down the line I will be faced with bringing it up to snuff. Or building a perfect backup system. Wouldn't you know my reading of SQL Server's backup says that a backup can only go to a hard disk on the same machine. HELLO... Unless the backup is highly compressed we are now talking another terabyte. How do you back up a terabyte? OK, now how do you do it on my budget? Anyway... I have always been the kind that would bravely say "Yea I can do that", and then go learn how to do it. This is one of those experiences... IN SPADES. But I WILL persevere! John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD) Sent: Monday, September 06, 2004 12:01 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in data part II Hi John: I have been following the data saga and have been quite enjoying it. Not in a sadistic way but in a sympathetic way. I am glad it is you and not me. Here are some observations and suggestions. 1. Why not import the data in chunks? Clean up each chunk by changing the 'Y' and 'N' fields to True and False (1,0). Then port the data to the final database. This could even be on another computer. 
Start partially breaking the data up along the lines described in that Warehousing article I posted a couple of days ago. This process could be accomplished by a SP. The Database/tables holding the cleaned up data would be far smaller than the raw imported data. The conversion would take longer to get all the data imported but in actual fact the whole process would be shorter. (Note if you are using a couple of computers connected across a LAN, a Switch is a better choice that a Hub.) 2. Do not even think of putting keys on the tables until all the data has been imported. Nothing consumes resources and slows performance more than having to manage keys when massive amounts of data is being imported. 3. Trying to do an exact match is almost impossible. There will be a number of matches but then there is the data where the person has changed their name, their name has been misspelled, a number of times, they have moved residence etc..etc... A lot of the matching can be done by the programs but it comes down grunt work...like ten cloistered data-clerks each with a 5 year contract. These comments may have already been covered or totally redundant but I hope there is something there that will help out. Jim From jwcolby at colbyconsulting.com Mon Sep 6 00:20:40 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 01:20:40 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <000701c493ce$0e9cf550$80b3fea9@ColbyM6805> Message-ID: <000801c493d1$436f44b0$80b3fea9@ColbyM6805> LOL. I keep talking about 1 g of data. Of course it is turning out to be almost 1 T. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Monday, September 06, 2004 12:58 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in data part II Aww you're just sadistic at heart! 
Jim _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jmoss111 at bellsouth.net Mon Sep 6 03:11:39 2004 From: jmoss111 at bellsouth.net (JMoss) Date: Mon, 6 Sep 2004 03:11:39 -0500 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: <001c01c49356$ef382080$80b3fea9@ColbyM6805> Message-ID: John, We used tools from PeopleSmith for hygiene and deduping: RightFielder refielded badly fielded lists, recognized and refielded names from company names, and refielded primary from secondary addresses; Personator split and reformatted full or inverse names into prefix, first name, middle name, last name, suffix, creating genderized prefixes, and split addresses; Styleist propercased names and addresses, corrected punctuation, and expanded abbreviations; and DoubleTake internally parsed names and addresses into parts (i.e. street name is parsed into street number, street name, street direction such as N or North, etc.), built matchcodes, and deduped. To build a match key basically in the manner that DoubleTake does, use the first three characters of the first name (after ensuring that the prefix or title was removed from the first name and placed in a prefix column), the first four characters of the last name (after ensuring that any suffix was removed from the end of the last name and placed in a suffix column), the first five characters of company name, the first seven characters of address line 1, the last two characters of address line 1, the first five characters of address line 2, the first five characters of city, and the first five characters of zip (after ensuring that zip is formatted properly and has been parsed into zip and zip plus 4). Also, we used householding upon customer's request. 
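[Jim's match-key recipe, taken literally, is easy to prototype. A rough Python sketch; the field names are invented, and the record is assumed to be already hygiened as he describes (prefix/suffix split out, zip parsed into zip and zip+4):]

```python
def match_key(rec):
    """Build a match key per Jim's recipe: 3 chars of first name, 4 of
    last name, 5 of company, first 7 and last 2 of address line 1, 5 of
    address line 2, 5 of city, and the 5-digit zip.  Field names here
    are illustrative, not from any real schema."""
    def norm(s):
        return (s or "").strip().upper()
    addr1 = norm(rec.get("address1"))
    return "".join([
        norm(rec.get("first_name"))[:3],
        norm(rec.get("last_name"))[:4],
        norm(rec.get("company"))[:5],
        addr1[:7],
        addr1[-2:],
        norm(rec.get("address2"))[:5],
        norm(rec.get("city"))[:5],
        norm(rec.get("zip"))[:5],
    ])

rec = {"first_name": "John", "last_name": "Colby", "company": "",
       "address1": "123 Main St", "address2": "",
       "city": "Northfield", "zip": "06057"}
print(match_key(rec))  # JOHCOLB123 MAISTNORTH06057
```

[As Jim says later, the slice widths are worth testing against your own data; these particular widths came from a specialized tool, not a standard.]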
We performed these steps preceding the data load via DataJunction process in a template file, which was then loaded into production databases. You might want to test the number of characters used in each field used in the match key and see what type of results that you get, because we used specialized tools for this process. We never indexed a match key nor did one exist in the prod db, only before the load on the template file. But we did do a merge/purge weekly or after a large ETL process. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:45 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update MATCH FIELD! That's the name they called it. What I haven't discovered is whether the field is actually required or whether a unique index on all these fields prevents dupes and I'm done. It seems like an extra step to pull those X characters out, append them all together, then drop them in a new field. They insist that it is needed but they don't understand databases. I understand databases but I don't know their business. I suspect that this new "match field" is then indexed to prevent dupes. Is it used for anything else? Is it a standard definition (how many characters from which fields)? John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Sunday, September 05, 2004 10:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:07 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Eric, >Why not use Merge Replication to merge your data? Maybe pure ignorance. As I have said I know nothing about SQL Server, learning as I go. However I am merging data in from a set of 22 comma delimited text files. From the word "replication" I assumed that this would merge data already in a sql server database. >For a unique id you can use the uniqueidentifier field type Precisely. I will be using a uniqueidentifier field as my pk. However I also need a unique index to prevent putting the same record in the database twice. This is a database of people in the US. I need something to attempt to recognize John Colby in Northfield CT as already in the db and not put a second copy in the database. 
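[Conceptually, the dedupe John is after is just a membership test on a derived key, whether a separate indexed "match field" stores that key or a unique index over the underlying columns enforces it server-side. A toy Python sketch with an invented key recipe:]

```python
def dedupe(records, key_func):
    """Keep the first record seen for each derived key -- the same
    effect a unique index on a match field (or on the underlying
    columns) gives you at insert time."""
    seen, out = set(), []
    for rec in records:
        k = key_func(rec)
        if k not in seen:
            seen.add(k)
            out.append(rec)
    return out

# Invented key recipe: case-insensitive slices of name and city.
key = lambda r: (r["first"][:3].upper(), r["last"][:4].upper(),
                 r["city"][:5].upper())

people = [
    {"first": "John", "last": "Colby", "city": "Northfield"},
    {"first": "JOHN", "last": "COLBY", "city": "northfield"},  # same person
    {"first": "Mary", "last": "Smith", "city": "Hartford"},
]
print(len(dedupe(people, key)))  # 2
```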
The client says that the industry uses a field where they take N characters from field A and M characters from field B and O characters from field C etc. I haven't seen any sign of such a field in the data that I am importing, but they keep saying we need such a thing. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 3:27 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Why not use Merge Replication to merge your data? For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. SQL Server creates a unique 16-byte (128-bit) id that you can use. In fact this is the field type used by the field when you include the table for replication. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this. 
Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that raids can be built from any combination of drives. I had hoped to set up a bunch of raid 1 containing disk pairs to hold the database but in the end had a heck of a time trying to get the raid to function reliably. I was trying to set up a Raid 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, threw in another 250 g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g Dimms a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turns out to be bad. It turns out that the bad stick was one of two I opened to just test the ram and so wrongly came to the conclusion that the ram simply did not work with these motherboards. Luckily after getting one of the motherboards up I went back "one last time" to try and manually tweak the ram parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running. The two new motherboards and my original dev machine (which I had retired in favor of my Laptop). 
I am so tired of Windows installs I could spit. Thank the big kahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single mdf file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second a64 machine, my old desktop, my new laptop and my wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. So much remains to be done, and none of it could proceed until I got all the data in which was a waaaay bigger task than I anticipated. I have ordered the MS Action pack and somewhere down the road I hope to get a Windows 2003 server set up. 
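[Editor's sketch of the multiple-file layout John describes: one data file per SATA drive, a separate filegroup for indexes on the fifth drive, and the log on the IDE drive. Database name, paths, and sizes are all made up for illustration:]

```sql
-- Illustrative only: names, drive letters, and sizes are not from the thread.
CREATE DATABASE AddressDB
ON PRIMARY
    (NAME = AddressDB_1, FILENAME = 'D:\Data\AddressDB_1.mdf', SIZE = 200000MB),
    (NAME = AddressDB_2, FILENAME = 'E:\Data\AddressDB_2.ndf', SIZE = 200000MB),
    (NAME = AddressDB_3, FILENAME = 'F:\Data\AddressDB_3.ndf', SIZE = 200000MB),
    (NAME = AddressDB_4, FILENAME = 'G:\Data\AddressDB_4.ndf', SIZE = 200000MB),
FILEGROUP IndexFG
    (NAME = AddressDB_IX, FILENAME = 'H:\Data\AddressDB_IX.ndf', SIZE = 100000MB)
LOG ON
    (NAME = AddressDB_Log, FILENAME = 'I:\Log\AddressDB_Log.ldf', SIZE = 20000MB);
```

Telling SQL Server where to put an index is then just a matter of naming the filegroup, e.g. `CREATE INDEX IX_Example ON dbo.SomeTable (SomeField) ON IndexFG`.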
I have heard rumors that it can run 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks that have been giving me suggestions and reading materials. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Mon Sep 6 08:15:20 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 09:15:20 -0400 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: Message-ID: <001001c49413$93087130$80b3fea9@ColbyM6805> Thanks for the MATCH FIELD definition. It's always good to know what others found to work. I wonder if a match field ends up unique, it certainly sounds as if it would be. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Monday, September 06, 2004 4:12 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, We used tools from PeopleSmith for hygiene and deduping: RightFielder refielded badly fielded lists, recognized and refielded names from company names, refielded primary from secondary addresses; Personator split and reformatted full or inverse names into prefix, first name, middle name, last name, suffix, creating genderized prefixes, and split addresses; Styleist propercased names, addresses, corrected punctuation, and expanded abbreviations; and DoubleTake internally parses names and addresses into parts, i.e. street address is parsed into street number, street name, street direction (like N or North), etc. into matchcodes and deduped. To build a match key basically in the manner that DoubleTake does, use the first three characters of the first name, after ensuring that the prefix or title was removed from the first name and placed in a prefix column, the first four characters of the last name, after ensuring that any suffix was removed from the end of last name and placed in a suffix column, the first five characters of company name, the first seven characters of address line 1, the last two characters of address line 1, the first five characters of address line 2, the first five characters of city, and the first five characters of zip, after ensuring that zip is formatted properly, and has been parsed into zip and zip plus 4. Also, we used householding upon customer's request. We performed these steps preceding the data load via DataJunction process in a template file, which was then loaded into production databases. 
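[Editor's sketch: Jim's DoubleTake-style recipe translates fairly directly into a computed column. Table and column names are illustrative, and this assumes the parsing/cleaning steps he describes (prefixes and suffixes stripped, zip split from zip+4) have already been done:]

```sql
-- Match key: 3 chars of first name + 4 of last name + 5 of company name
-- + first 7 and last 2 chars of address line 1 + 5 of address line 2
-- + 5 of city + 5 of zip, per the recipe above. Names are hypothetical.
ALTER TABLE dbo.AddressList ADD MatchKey AS
      LEFT(FirstName, 3)
    + LEFT(LastName, 4)
    + LEFT(CompanyName, 5)
    + LEFT(Address1, 7)
    + RIGHT(RTRIM(Address1), 2)
    + LEFT(Address2, 5)
    + LEFT(City, 5)
    + LEFT(Zip, 5);
```

One caveat: under the default CONCAT_NULL_YIELDS_NULL setting, a NULL in any component makes the whole key NULL, so in practice each component probably wants an ISNULL(col, '') wrapper.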
You might want to test the number of characters used in each field used in the match key and see what type of results you get, because we used specialized tools for this process. We never indexed a match key nor did one exist in the prod db, only before the load on the template file. But we did do a merge/purge weekly or after a large ETL process. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:45 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update MATCH FIELD! That's the name they called it. What I haven't discovered is whether the field is actually required or whether a unique index on all these fields prevents dupes and I'm done. It seems like an extra step to pull those X characters out, append them all together, then drop them in a new field. They insist that it is needed but they don't understand databases. I understand databases but I don't know their business. I suspect that this new "match field" is then indexed to prevent dupes. Is it used for anything else? Is it a standard definition (how many characters from which fields)? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Sunday, September 05, 2004 10:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. 
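[Editor's sketch: the xp_md5 extended procedure Jim mentions below is one way to hash such a match key; using only functions built into SQL Server 2000, CHECKSUM is a rough stand-in. It is only 32 bits, so it can narrow candidate matches but cannot guarantee uniqueness; HASHBYTES does not arrive until SQL Server 2005. Table and column names are illustrative:]

```sql
-- 32-bit checksum over the match fields. Collisions are possible, so this
-- only narrows the candidate set; true duplicates must be confirmed by
-- comparing the underlying fields. All names here are hypothetical.
ALTER TABLE dbo.AddressList ADD MatchChecksum AS
    CHECKSUM(FirstName, LastName, Address1, City, State, Zip);

-- List groups of candidate duplicates:
SELECT MatchChecksum, COUNT(*) AS Cnt
FROM dbo.AddressList
GROUP BY MatchChecksum
HAVING COUNT(*) > 1;
```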
Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:07 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Eric, >Why not use Merge Replication to merge your data? Maybe pure ignorance. As I have said I know nothing about SQL Server, learning as I go. However I am merging data in from a set of 22 comma delimited text files. From the word "replication" I assumed that this would merge data already in a sql server database. >For a unique id you can use the uniqueidentifier field type Precisely. I will be using a uniqueidentifier field as my pk. However I also need a unique index to prevent putting the same record in the database twice. This is a database of people in the US. I need something to attempt to recognize John Colby in Northfield CT as already in the db and not put a second copy in the database. The client says that the industry uses a field where they take N characters from field A and M characters from field B and O characters from field C etc. I haven't seen any sign of such a field in the data that I am importing, but they keep saying we need such a thing. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 3:27 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Why not use Merge Replication to merge your data? For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. 
SQL server creates a unique 16-byte id that you can use. In fact this is the field type used by the field when you include the table for replication. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this. Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that raids can be built from any combination of drives. I had hoped to set up a bunch of raid 1 containing disk pairs to hold the database but in the end had a heck of a time trying to get the raid to function reliably. I was trying to set up a Raid 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. 
As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, threw in another 250 g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g Dimms a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turned out to be bad. It turns out that the bad stick was one of two I opened to just test the ram and so wrongly came to the conclusion that the ram simply did not work with these motherboards. Luckily after getting one of the motherboards up I went back "one last time" to try and manually tweak the ram parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running. The two new motherboards and my original dev machine (which I had retired in favor of my Laptop). I am so tired of Windows installs I could spit. Thank the big kahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single mdf file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second a64 machine, my old desktop, my new laptop and my wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. 
Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. So much remains to be done, and none of it could proceed until I got all the data in which was a waaaay bigger task than I anticipated. I have ordered the MS Action pack and somewhere down the road I hope to get a Windows 2003 server set up. I have heard rumors that it can run 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks that have been giving me suggestions and reading materials. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From artful at rogers.com Mon Sep 6 09:47:49 2004 From: artful at rogers.com (Arthur Fuller) Date: Mon, 6 Sep 2004 10:47:49 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <000801c493d1$436f44b0$80b3fea9@ColbyM6805> Message-ID: <006101c49420$7b3e95e0$6501a8c0@rock> Perhaps you should have gone with a pair of 1TB LaCie drives, JC, or two pairs. A long time ago somebody told me, "Always get enough space so the disk is never more than half full. Then you can do anything you want." -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. 
Colby Sent: Monday, September 06, 2004 1:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in data part II LOL. I keep talking about 1 g of data. Of course it is turning out to be almost 1 T. John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Mon Sep 6 10:14:26 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 11:14:26 -0400 Subject: [dba-SQLServer] Quotes in data part II In-Reply-To: <006101c49420$7b3e95e0$6501a8c0@rock> Message-ID: <000101c49424$36fc62a0$80b3fea9@ColbyM6805> All of these drives are EXTERNAL drives, connecting via usb or firewire. That simply isn't fast enough. Those are really cool for offline storage, images and such but for a SQL Server database I don't think they would cut the mustard. I have an external 250gb drive from Seagate connected via usb2. It is wicked fast (for external) but it doesn't even come close to drives hooked directly to a motherboard. PLUS the cpu overhead is enormous. Apparently the CPU is involved in pumping the data through the USB port. The USB drive sucks up somewhere around 40% of the CPU cycles when it is reading or writing. I spent a LOT of money on modern motherboards, processors and memory to build a fast server for this thing. The new Nvidia chip for the AMD64 ties 4 SATA ports and 2 IDE ports directly to the "northbridge" (though there is only one chip now) as well as tying the gigabit NIC directly to the chip. It is the fastest thing available at the moment exactly because it has a single massive chip with all these peripherals tied directly to the Athlon64's data bus, and then that chip takes over much of the processing for the data moving in and out of the machine. The only thing on my backplane is a lonely little graphics card. ;-) BELIEVE ME, LaCie is NOT the answer. They are nice drives, just not for a database server. They might do nicely for the database backup though (rubs his bleary eyes thoughtfully...). John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Monday, September 06, 2004 10:48 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in data part II Perhaps you should have gone with a pair of 1TB LaCie drives, JC, or two pairs. A long time ago somebody told me, "Always get enough space so the disk is never more than half full. Then you can do anything you want." -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Monday, September 06, 2004 1:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in data part II LOL. I keep talking about 1 g of data. Of course it is turning out to be almost 1 T. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Mon Sep 6 12:34:59 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 13:34:59 -0400 Subject: [dba-SQLServer] DTS Processor / memory usage - ramblings In-Reply-To: <000101c49424$36fc62a0$80b3fea9@ColbyM6805> Message-ID: <000401c49437$d8376e40$80b3fea9@ColbyM6805> As I mentioned, I have 4 machines simultaneously importing data into a SQL Server database, using DTS from inside of SQL EM. The raw data files are 10gb each, with 3 million records, 660 fields, comma delimited, text files. Neo2 - The server is using 4 data files on separate 250g SATA drives and a log file on a 250g IDE drive. I have set the Server to expand the database in 1gb chunks, unlimited expansion. 
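[Editor's sketch: the same comma-delimited files can also be loaded with BULK INSERT rather than DTS. Table name, path, and sizes are made up; the BATCHSIZE option is the usual lever for keeping each commit, and therefore the log, smaller during a huge load:]

```sql
-- Illustrative only: table, path, and batch size are hypothetical.
BULK INSERT dbo.AddressList
FROM 'D:\RawData\file01.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 50000,  -- commit every 50k rows instead of one giant transaction
    TABLOCK                   -- table lock allows minimal logging under bulk-logged recovery
);

-- Keeping the log in check during bulk loads (SQL Server 2000 syntax);
-- the database name is made up:
ALTER DATABASE AddressDB SET RECOVERY BULK_LOGGED;
BACKUP LOG AddressDB TO DISK = 'E:\Backup\AddressDB_Log.bak';
```

Under the default full recovery model the log records every inserted row, which is consistent with the rapid log growth reported elsewhere in the thread; bulk-logged recovery plus periodic log backups lets the log space be reused.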
NEO2 - The server, A64 3ghz with 2gb ram NEO1 - A workstation, A64 3ghz with 2.5gb ram Soltek1 - A workstation, AMD Athlon 2.5ghz with 750mb ram Marys - A workstation, AMD Athlon 1.2ghz with 384mb ram All the machines except the server (NEO2) have attached the SQL Server on NEO2 to their SQL Server Group. Performance (very rough eyeballed) The server is using 1858340k of RAM (1.8g) and 100% of the processor cycles except when it goes out to expand the database files, when the processor usage drops close to 0. So the server is maxed out. Importing 11k records every 30 seconds. Neo1 - Is using 199044k memory and ~35% of the processor cycles, except when the server expands the data files, when the processor usage drops close to 0. Importing 14K records every 30 seconds. Soltek1 - Is using 148476K of ram and ~60% of processor cycles except during file expansion. Notice this processor only has 750m of physical ram which means it is using the swap file somehow. Importing 14k records in 30 seconds. Marys - is using 330884K of ram and 75% of processing power except during file expansion. Notice this machine has only 384M of ram and is using most of it but NOT loading up the swap file. However I haven't done anything else on this machine either. Importing 10K records every 30 seconds. So I am loading 49K records every 30 seconds across all the machines. All the machines are pulling the raw data off a directory on the server. The next set of imports will be pulling the raw data off directories on Neo1. John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Mon Sep 6 12:44:15 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 13:44:15 -0400 Subject: [dba-SQLServer] HELP - Log file growing rapidly In-Reply-To: <000101c49424$36fc62a0$80b3fea9@ColbyM6805> Message-ID: <000501c49439$23b3d010$80b3fea9@ColbyM6805> I am importing this data into the server. The log file is growing rapidly. 
It is currently at 23g, where the 4 data files are around 13g each. Is this natural or is it growing this rapidly because I am importing simultaneously from the 4 different machines? With 144g of free space on the drive that the log file is on, I am in no danger of running out of room, certainly not before the current set of imports finish. However it appears that I do need to figure out how to back up the log files so that I can re-use the room for the next set of imports. John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Mon Sep 6 13:30:45 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 14:30:45 -0400 Subject: [dba-SQLServer] Cannot obtain a lock In-Reply-To: <000501c49439$23b3d010$80b3fea9@ColbyM6805> Message-ID: <000601c4943f$a3043930$80b3fea9@ColbyM6805> Well, if it ain't one thing its another. NOW... The server has stopped processing and all the workstations are stopped as well. When I try to look at the server properties I get: "The SQL Server cannot obtain a lock resource at this time. Rerun your statement when there are fewer users or ask the system administrator to check the SQL Server lock and memory configuration". Unfortunately I can't check the server lock and memory configuration because I get an error... "can't obtain a lock resource..." I am trying to cancel one of the 4 machines import to see if that frees up enough resources to go look at the problem, but I have to wonder if the rollback can be done if it can't "get a lock resource". Is there anything that can be done here? John W. 
Colby www.ColbyConsulting.com From jmoss111 at bellsouth.net Mon Sep 6 11:07:00 2004 From: jmoss111 at bellsouth.net (JMoss) Date: Mon, 6 Sep 2004 11:07:00 -0500 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: <001001c49413$93087130$80b3fea9@ColbyM6805> Message-ID: John, Even with that type of match key you will have dups: poor data entry, people giving their names or addresses a bit differently, the same address but without a zip code. With record volumes like that you will see it all. The worst records to convert are records obtained from web sites, where people leave their contact information; lots of garbage. We did CRM for the NBA and various major league sports franchises and they were the hardest to ETL. Are you running the A64 on a 32 bit OS and if so do you really think that you see a good performance boost from the 64 bit architecture? -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Monday, September 06, 2004 8:15 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Thanks for the MATCH FIELD definition. It's always good to know what others found to work. I wonder if a match field ends up unique, it certainly sounds as if it would be. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Monday, September 06, 2004 4:12 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, We used tools from PeopleSmith for hygiene and deduping: RightFielder refielded badly fielded lists, recognized and refielded names from company names, refielded primary from secondary addresses; Personator split and reformatted full or inverse names into prefix, first name, middle name, last name, suffix, creating genderized prefixes, and split addresses; Styleist propercased names, addresses, corrected punctuation, and expanded abbreviations; and DoubleTake internally parses names and addresses into parts, i.e. street address is parsed into street number, street name, street direction (like N or North), etc. into matchcodes and deduped. To build a match key basically in the manner that DoubleTake does, use the first three characters of the first name, after ensuring that the prefix or title was removed from the first name and placed in a prefix column, the first four characters of the last name, after ensuring that any suffix was removed from the end of last name and placed in a suffix column, the first five characters of company name, the first seven characters of address line 1, the last two characters of address line 1, the first five characters of address line 2, the first five characters of city, and the first five characters of zip, after ensuring that zip is formatted properly, and has been parsed into zip and zip plus 4. Also, we used householding upon customer's request. We performed these steps preceding the data load via DataJunction process in a template file, which was then loaded into production databases. 
You might want to test the number of characters used in each field used in the match key and see what type of results you get, because we used specialized tools for this process. We never indexed a match key nor did one exist in the prod db, only before the load on the template file. But we did do a merge/purge weekly or after a large ETL process. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:45 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update MATCH FIELD! That's the name they called it. What I haven't discovered is whether the field is actually required or whether a unique index on all these fields prevents dupes and I'm done. It seems like an extra step to pull those X characters out, append them all together, then drop them in a new field. They insist that it is needed but they don't understand databases. I understand databases but I don't know their business. I suspect that this new "match field" is then indexed to prevent dupes. Is it used for anything else? Is it a standard definition (how many characters from which fields)? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Sunday, September 05, 2004 10:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. 
Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:07 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Eric, >Why not use Merge Replication to merge your data? Maybe pure ignorance. As I have said I know nothing about SQL Server, learning as I go. However I am merging data in from a set of 22 comma delimited text files. From the word "replication" I assumed that this would merge data already in a sql server database. >For a unique id you can use the uniqueidentifier field type Precisely. I will be using a uniqueidentifier field as my pk. However I also need a unique index to prevent putting the same record in the database twice. This is a database of people in the US. I need something to attempt to recognize John Colby in Northfield CT as already in the db and not put a second copy in the database. The client says that the industry uses a field where they take N characters from field A and M characters from field B and O characters from field C etc. I haven't seen any sign of such a field in the data that I am importing, but they keep saying we need such a thing. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Sunday, September 05, 2004 3:27 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Why not use Merge Replication to merge your data? For a unique id you can use the uniqueidentifier field type. Select yes for the "is rowguid" option. 
SQL server creates a unique 16-byte id that you can use. In fact this is the field type used by the field when you include the table for replication. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 04, 2004 11:35 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Does anyone know of a robust hash algorithm for SQL Server? I need to generate a unique id (natural PK) for the nVLDB address database I am working on. I currently have a gaggle (as opposed to a google) of machines busily importing the raw data into the database and once that is done I have to build a unique identifier that can be used to find an address record if it exists for merges and updates. Anyone who has worked on databases like this feel free to pipe in with how you did this. Rambling update - I have been building a pair of machines with the MSI Nforce3 motherboard for the ATHLON64 - http://www.newegg.com/app/viewProductDesc.asp?description=13-130-457&depa=0 With a 3ghz A64 processor. It promises 4 SATA channels and 2 IDE channels for 8 total devices. It promises that raids can be built from any combination of drives. I had hoped to set up a bunch of raid 1 containing disk pairs to hold the database but in the end had a heck of a time trying to get the raid to function reliably. I was trying to set up a Raid 1 boot disk and spent literally days installing and reinstalling trying to get that working. Never did. I did however get a REAL PERSON at MSI to talk to and with luck will sort that problem next week. I finally simply had to give up for now and get on with the show since I need results next week. 
As a result I built one of the machines with (4) SATA 250gb disks for a terabyte of storage, threw in another 250g drive on an IDE channel to hold the log file and the raw data files from which the import is happening. This whole thing has been an exercise in humility I must admit, with much time spent going nowhere. I purchased (2) 1g DIMMs a few weeks ago for an existing Athlon machine (my old desktop) and when I purchased the pair of motherboards I purchased (3) more gig sticks, one of which turned out to be bad. The bad stick was one of the two I opened just to test the RAM, so I wrongly came to the conclusion that the RAM simply did not work with these motherboards. Luckily, after getting one of the motherboards up, I went back "one last time" to try and manually tweak the RAM parameters and see if they would work (didn't want to RMA them if I didn't have to) and discovered that in fact 2 sticks did work. Long story medium long, TODAY I finally got ALL of the machines up and running: the two new motherboards and my original dev machine (which I had retired in favor of my laptop). I am so tired of Windows installs I could spit. Thank the big kahuna for high speed internet. I also figured out how to get SQL Server to use multiple files on different disks as a single database, which given my total ignorance about SQL Server I consider to be a major victory. When I started this I thought I needed a single file on a HUGE disk. So I have the (4) 250gb SATA drives each holding a single data file for a total capacity of 1 terabyte. By my calculations the data will be around 600+ gbytes, giving me a little head room. A fifth 250gb drive will hold the indexes (assuming I can figure out how to tell SQL where to put them). I now have the second A64 machine, my old desktop, my new laptop and my wife's desktop all running bcp queries dumping the raw data into the server. Each machine is also simultaneously unzipping a raw data file - ~350g zipped, 10g unzipped. 
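The multiple-file layout described above (four data files on four drives, indexes on a fifth) can be sketched in T-SQL roughly as follows; the database, logical file, and path names are all illustrative:

```sql
-- One database spread across four data files on separate physical drives.
-- Only the first file is an .mdf; secondary files are conventionally .ndf.
CREATE DATABASE AddressVLDB
ON PRIMARY
    (NAME = Data1, FILENAME = 'E:\Data\AddressVLDB_1.mdf', SIZE = 200000MB),
    (NAME = Data2, FILENAME = 'F:\Data\AddressVLDB_2.ndf', SIZE = 200000MB),
    (NAME = Data3, FILENAME = 'G:\Data\AddressVLDB_3.ndf', SIZE = 200000MB),
    (NAME = Data4, FILENAME = 'H:\Data\AddressVLDB_4.ndf', SIZE = 200000MB)
LOG ON
    (NAME = AddressVLDB_Log, FILENAME = 'I:\Log\AddressVLDB_Log.ldf', SIZE = 20000MB)

-- "Telling SQL where to put the indexes" is done with a filegroup on the fifth drive:
ALTER DATABASE AddressVLDB ADD FILEGROUP IndexFG
ALTER DATABASE AddressVLDB ADD FILE
    (NAME = Index1, FILENAME = 'J:\Index\AddressVLDB_Ix.ndf', SIZE = 100000MB)
TO FILEGROUP IndexFG
-- ...and then, for example: CREATE INDEX IX_Zip ON tblAddress (Zip) ON IndexFG
```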
Talk about saturating your network bandwidth! With luck, by this time tomorrow I will have all the data in and be able to start the REAL work. Tomorrow I have to figure out the unique index thing, plus start to look at (become familiar with) the data fields. I also want to build an "autonumber" PK. Eventually I would like to experiment with dividing the data out onto different machines. The database currently has 600 fields and I fear that I am about to bump into the 8k / record limitation that someone on the list has mentioned. If that happens I will have to divide the db vertically. I also need to build some views containing subsets of the fields to make analysis and exporting easier. So much remains to be done, and none of it could proceed until I got all the data in which was a waaaay bigger task than I anticipated. I have ordered the MS Action pack and somewhere down the road I hope to get a Windows 2003 server set up. I have heard rumors that it can run 64 bit mode, and that SQL Server can as well, so if that is true I will be testing a 64 bit system, perhaps setting up two identical systems, one 32 bit and one 64 bit for a side by side speed comparison. Of course I need to get some PAYING work done to allow me to do that. ;-) Anyway, that's all the news that's fit to print. Thanks to all the folks that have been giving me suggestions and reading materials. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Mon Sep 6 13:56:47 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 14:56:47 -0400 Subject: [dba-SQLServer] SQL Server Hash algorithm - question, then rambling update In-Reply-To: Message-ID: <000701c49443$4402a440$80b3fea9@ColbyM6805> >Are you running the A64 on a 32 bit OS and if so do you really think that you see a good performance boost from the 64 bit architecture? No idea. I intend to find out though. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Monday, September 06, 2004 12:07 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, Even with that type of match key you will have dups, whether because of poor data entry, people giving their names or addresses a bit differently, or the same address without a zip code; with high-volume records like that you will see it all. The worst records to convert are records obtained from web sites, where people leave their contact information; lots of garbage. We did CRM for the NBA and various major league sports franchises and they were the hardest to ETL. Are you running the A64 on a 32 bit OS and if so do you really think that you see a good performance boost from the 64 bit architecture? -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Monday, September 06, 2004 8:15 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update Thanks for the MATCH FIELD definition. It's always good to know what others found to work. I wonder if a match field ends up unique; it certainly sounds as if it would. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Monday, September 06, 2004 4:12 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, We used tools from PeopleSmith for hygiene and deduping: RightFielder refielded badly fielded lists, recognized and refielded names from company names, and refielded primary from secondary addresses; Personator split and reformatted full or inverse names into prefix, first name, middle name, last name, suffix, creating genderized prefixes, and split addresses; Styleist propercased names and addresses, corrected punctuation, and expanded abbreviations; and DoubleTake internally parses names and addresses into parts, i.e. street name is parsed into street number, street name, street direction (like N or North), etc. into matchcodes, and dedupes. To build a match key basically in the manner that DoubleTake does, use the first three characters of the first name (after ensuring that the prefix or title was removed from the first name and placed in a prefix column), the first four characters of the last name (after ensuring that any suffix was removed from the end of the last name and placed in a suffix column), the first five characters of the company name, the first seven characters of address line 1, the last two characters of address line 1, the first five characters of address line 2, the first five characters of the city, and the first five characters of the zip (after ensuring that the zip is formatted properly and has been parsed into zip and zip plus 4). Also, we used householding upon customer's request. We performed these steps preceding the data load via a DataJunction process in a template file, which was then loaded into production databases. 
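Jim's DoubleTake-style recipe could be approximated in plain T-SQL along these lines; the column names and exact character counts are illustrative and would have to follow whatever convention John's client actually uses:

```sql
-- Match key per the recipe above: 3 of first name, 4 of last name, 5 of company,
-- first 7 + last 2 of address line 1, 5 of address line 2, 5 of city, 5 of zip.
-- ISNULL keeps a single NULL column from nulling the whole concatenation.
SELECT UPPER(
       LEFT(LTRIM(ISNULL(FirstName, '')), 3)
     + LEFT(LTRIM(ISNULL(LastName,  '')), 4)
     + LEFT(LTRIM(ISNULL(Company,   '')), 5)
     + LEFT(LTRIM(ISNULL(Address1,  '')), 7)
     + RIGHT(RTRIM(ISNULL(Address1, '')), 2)
     + LEFT(LTRIM(ISNULL(Address2,  '')), 5)
     + LEFT(LTRIM(ISNULL(City,      '')), 5)
     + LEFT(ISNULL(Zip, ''), 5)) AS MatchKey
FROM tblAddress
```

If the match key is stored in its own column, a unique index created WITH IGNORE_DUP_KEY would silently discard exact duplicates during the load, which is one answer to the "is the field required or is the index enough" question in this thread.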
You might want to test the number of characters used in each field used in the match key and see what type of results that you get, because we used specialized tools for this process. We never indexed a match key nor did one exist in the prod db, only before the load on the template file. But we did do a merge/purge weekly or after a large ETL process. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 05, 2004 9:45 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update MATCH FIELD! That's the name they called it. What I haven't discovered is whether the field is actually required or whether a unique index on all these fields prevents dupes and I'm done. It seems like an extra step to pull those X characters out, append them all together, then drop them in a new field. They insist that it is needed but they don't understand databases. I understand databases but I don't know their business. I suspect that this new "match field" is then indexed to prevent dupes. Is it used for anything else? Is it a standard definition (how many characters from which fields)? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Sunday, September 05, 2004 10:26 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server Hash algorithm - question,then rambling update John, What the client is talking about is taking x characters from first name, x characters from last name, x characters from address1, x characters from city, x characters from state, and x characters from the zip code and creating a match key field from that. Other criteria could be added to ensure the uniqueness of the record. 
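On the hash-key side: SQL Server 2000 has no built-in MD5 (hence extended procedures like the xp_md5 link earlier in the thread), but the built-in CHECKSUM function gives the flavor of the idea. A hedged sketch with illustrative column names; note CHECKSUM is only 32 bits, so collisions are a real risk at 65 million rows:

```sql
-- A 32-bit hash over the match fields; int output, cheap to index,
-- but NOT collision-free, so it can narrow candidates, not prove a dup.
SELECT CHECKSUM(FirstName, LastName, Address1, City, Zip) AS HashKey
FROM tblAddress
```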
Then you could use something like http://www.codeproject.com/database/xp_md5.asp to build a hash key. Either key could be used for purposes of deduping. Jim _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From accessd at shaw.ca Mon Sep 6 14:31:42 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Mon, 06 Sep 2004 12:31:42 -0700 Subject: [dba-SQLServer] HELP - Log file growing rapidly In-Reply-To: <000501c49439$23b3d010$80b3fea9@ColbyM6805> Message-ID: Hi John: Can you not put a limit on the log file's growth, have an alert 
set to notify you when the hard drive capacity has reached your set maximum, then delete the log file. See article: http://www.winnetmag.com/Article/ArticleID/19761/19761.html HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Monday, September 06, 2004 10:44 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] HELP - Log file growing rapidly I am importing this data into the server. The log file is growing rapidly. It is currently at 23g, while the 4 data files are around 13g each. Is this natural or is it growing this rapidly because I am importing simultaneously from the 4 different machines? With 144g of free space on the drive that the log file is on, I am in no danger of running out of room, certainly not before the current set of imports finish. However it appears that I do need to figure out how to back up the log files so that I can re-use the room for the next set of imports. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From ebarro at afsweb.com Mon Sep 6 15:23:45 2004 From: ebarro at afsweb.com (Eric Barro) Date: Mon, 6 Sep 2004 13:23:45 -0700 Subject: [dba-SQLServer] HELP - Log file growing rapidly In-Reply-To: <000501c49439$23b3d010$80b3fea9@ColbyM6805> Message-ID: John, Here's what I had posted earlier to deal with the growing log file issues... Overview: You will need to restrict the growth of the transaction log file and then set up a backup schedule for the log file followed by a shrink log file operation. 1. Right click on the database and then go into Properties. 2. Click the Transaction Log tab 3. Make sure to restrict file growth to a manageable size so your drive won't run out of disk space 4. Click Options tab 5. 
Check the Auto shrink checkbox. This will allow SQL Server to automatically shrink your database and log files to the minimum required. 6. Click OK and apply all those changes. 7. Go into the Management section of EM. 8. Select Backup 9. Create a backup set for your log files. Select the db and check the Transaction Log radio button. Select the destination, check the schedule box, and specify when you want it to run. Depending on the amount of activity and the maximum amount you specified for the log file to grow, you will need to specify a scheduled backup and shrink operation that will suit your needs. On our production environment I had it set to 1 hr intervals. 10. Set up a scheduled shrink log file operation. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Monday, September 06, 2004 10:44 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] HELP - Log file growing rapidly I am importing this data into the server. The log file is growing rapidly. It is currently at 23g, while the 4 data files are around 13g each. Is this natural or is it growing this rapidly because I am importing simultaneously from the 4 different machines? With 144g of free space on the drive that the log file is on, I am in no danger of running out of room, certainly not before the current set of imports finish. However it appears that I do need to figure out how to back up the log files so that I can re-use the room for the next set of imports. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Mon Sep 6 15:49:46 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 06 Sep 2004 16:49:46 -0400 Subject: [dba-SQLServer] HELP - Log file growing rapidly In-Reply-To: Message-ID: <000e01c49453$0cc6fb60$80b3fea9@ColbyM6805> Eric, Thanks for that detailed method of doing this stuff. I am working to figure out the backup piece as we speak. I had figured I would simply get all the data into the db then backup as soon as I finished, but I guess that simply isn't possible. I have a huge disk just for the log file, I don't much care if the log file grows huge as long as it works. When I first started this (two weeks ago) I was doing manual DTS operations to get each file into the db in sequence. I actually got about 11 of the files in (of 22 total files) but ran out of disk space - I was using (2) 200g drives Raid 0 for a 400g workspace and a single database file. The db file was 300g after 11 files so I could see that the rest of the data wasn't going to fit. I stopped, ordered a bunch of hardware, set up a dedicated machine, learned about multiple file databases, and set up 4 files plus a log file on 5 different 250g disks. It appears that WILL be enough space, probably even without backing up the log file. The log file 2 weeks ago (after 11 files) was only 11gb for a 300g data file so there is more to this than meets the eye. I now have a 250g drive dedicated to the log file for (4) data files which will (in the end) total by my estimations about 600 gb. The last few days have been agony simply because I was trying to speed things up and didn't know enough to do so. Problem after problem has caused me to waste the last two days. 
I have decided to just go back to my original method, and use DTS to import each file, one at a time, in sequence. That was working 2 weeks ago, and I PRAY it will work now. It will take me a full day to do it that way but then I have wasted TWO full days trying to do it a faster way. Ain't life grand? John W. Colby www.ColbyConsulting.com 
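Eric's point-and-click routine above (restrict growth, back up the log, shrink) can also be scripted. A hedged T-SQL sketch for SQL Server 2000; the database name, logical log file name, and backup path are all illustrative:

```sql
-- During a pure bulk load, the bulk-logged recovery model keeps the log smaller:
ALTER DATABASE AddressVLDB SET RECOVERY BULK_LOGGED

-- Back up the transaction log so its space can be reused:
BACKUP LOG AddressVLDB TO DISK = 'F:\Backup\AddressVLDB_Log.bak'

-- Or, if the log contents are disposable while the initial import runs:
-- BACKUP LOG AddressVLDB WITH TRUNCATE_ONLY

-- Then shrink the physical log file back down (target size in MB):
DBCC SHRINKFILE (AddressVLDB_Log, 1000)
```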
_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From accessd at shaw.ca Mon Sep 6 17:12:25 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Mon, 06 Sep 2004 15:12:25 -0700 Subject: [dba-SQLServer] HELP - Log file growing rapidly In-Reply-To: <000e01c49453$0cc6fb60$80b3fea9@ColbyM6805> Message-ID: Hi John: Here is a site with information specifically on SQL 2000 that you may not have already seen. 
There is some very useful information on bulk importing of data: http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/p art2/c0661.mspx (watch for wrap) There is a great section on warehousing and using the OLAP tools. I hate to be a pessimist but it seems even if you had an over-clock Cray mainframe, suspended in a bath of liquid nitrogen you could not have all the data translated by this week so you may have to seriously think of managing your data in chunks so you will have something to demonstrate to your client. HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Monday, September 06, 2004 1:50 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] HELP - Log file growing rapidly Eric, Thanks for that detailed method of doing this stuff. I am working to figure out the backup piece as we speak. I had figured I would simply get all the data into the db then backup as soon as I finished, but I guess that simply isn't possible. I have a huge disk just for the log file, I don't much care if the log file grows huge as long as it works. When I first started this (two weeks ago) I was doing manual DTS operations to get each file into the db in sequence. I actually got about 11 of the files in (of 22 total files) but ran out of disk space - I was using (2) 200g drives Raid 0 for a 400g workspace and a single database file. The db file was 300g after 11 files so I could see that the rest of the data wasn't going to fit. I stopped, ordered a bunch of hardware, set up a dedicated machine, learned about multiple file databases, and set up 4 files plus a log file on 5 different 250g disks. It appears that WILL be enough space, probably even without backing up the log file. The log file 2 weeks ago (after 11 files) was only 11gb for a 300g data file so there is more to this than meets the eye. 
I now have a 250g drive dedicated to the log file for (4) data files which will (in the end) total by my estimations about 600 gb. The last few days have been agony simply because I was trying to speed things up and didn't know enough to do so. Problem after problem has caused me to waste the last two days. I have decided to just go back to my original method, and use DTS to import each file, one at a time, in sequence. That was working 2 weeks ago, and I PRAY it will work now. It will take me a full day to do it that way but then I have wasted TWO full days trying to do it a faster way. Ain't life grand? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Monday, September 06, 2004 4:24 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] HELP - Log file growing rapidly John, Here's what I had posted earlier to deal with the growing log file issues... Overview: You will need to restrict the growth of the transaction log file and then set up a backup schedule for the log file followed by a shrink log file operation. 1. Right click on the database and then go into Properties. 2. Click the Transaction Log tab 3. Make sure to restrict file growth to a manageable size where your drive won't run out disk space 4. Click Options tab 5. Check the Auto shrink checkbox. This will allow SQL server to automatically shrink your database and log files to the minumum required. 6. Click OK and apply all those changes. 7. Go into the Management section of EM. 8. Select Backup 9. Create a backup set for your log files. Select the db, check Transaction Log radio button. Select destination and check the schedule box and specify when you want it to run. 
Depending on the amount of activity and the maximum size you specified for the log file to grow to, you will need to schedule a backup and shrink operation that suits your needs. In our production environment I had it set to 1-hour intervals.

10. Set up a scheduled shrink log file operation.

---
Eric Barro
Senior Systems Analyst
Advanced Field Services
(208) 772-7060
http://www.afsweb.com

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby
Sent: Monday, September 06, 2004 10:44 AM
To: dba-sqlserver at databaseadvisors.com
Subject: [dba-SQLServer] HELP - Log file growing rapidly

I am importing this data into the server. The log file is growing rapidly. It is currently at 23 GB, where the 4 data files are around 13 GB each. Is this natural, or is it growing this rapidly because I am importing simultaneously from the 4 different machines? With 144 GB of free space on the drive that the log file is on, I am in no danger of running out of room, certainly not before the current set of imports finishes. However, it appears that I do need to figure out how to back up the log files so that I can re-use the room for the next set of imports.

John W.
Colby
www.ColbyConsulting.com

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From michael at ddisolutions.com.au Mon Sep 6 19:51:12 2004
From: michael at ddisolutions.com.au (Michael Maddison)
Date: Tue, 7 Sep 2004 10:51:12 +1000
Subject: [dba-SQLServer] HELP - Log file growing rapidly
Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011B60@ddi-01.DDI.local>

My fav sql site... http://www.sql-server-performance.com/default.asp

Maybe you'll find something here to help. I've never been 'lucky' enough to work with a db that size yet! I'm waiting for a client to send me a DVD with about 10 mill rows on it, so I've been following with interest ;-)

cheers

Michael M

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD)
Sent: Tuesday, 7 September 2004 8:12 AM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] HELP - Log file growing rapidly

Hi John:

Here is a site with information specifically on SQL 2000 that you may not have already seen.

From andrew.haslett at ilc.gov.au Mon Sep 6 20:30:56 2004
From: andrew.haslett at ilc.gov.au (Haslett, Andrew)
Date: Tue, 7 Sep 2004 11:00:56 +0930
Subject: [dba-SQLServer] HELP - Log file
growing rapidly
Message-ID: <0A870603A2A816459078203FC07F4CD201E8DE@adl01s055.ilcorp.gov.au>

Why not change the recovery model of your database to 'SIMPLE', meaning minimal log space is used? I can't see that you'll need a point-in-time restore, so there's no point using the FULL or BULK-LOGGED recovery models...

-----Original Message-----
From: John W. Colby [mailto:jwcolby at colbyconsulting.com]
Sent: Tuesday, 7 September 2004 3:14 AM
To: dba-sqlserver at databaseadvisors.com
Subject: [dba-SQLServer] HELP - Log file growing rapidly

I am importing this data into the server. The log file is growing rapidly.

IMPORTANT - PLEASE READ
********************
This email and any files transmitted with it are confidential and may contain information protected by law from disclosure. If you have received this message in error, please notify the sender immediately and delete this email from your system. No warranty is given that this email or files, if attached to this email, are free from computer viruses or other defects. They are provided on the basis that the user assumes all responsibility for loss, damage or consequence resulting directly or indirectly from their use, whether caused by the negligence of the sender or not.
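Andrew's recovery-model change, and the scheduled log backup and shrink that Eric walks through in Enterprise Manager, can also be issued directly as T-SQL on SQL Server 2000. A hedged sketch; the database name, logical log file name, and backup path are illustrative, and the two approaches are alternatives (under SIMPLE recovery, log backups are neither needed nor permitted):

```sql
-- Option 1: minimal logging for a bulk-load-only database (no point-in-time restore).
ALTER DATABASE VLDB01 SET RECOVERY SIMPLE

-- Option 2: stay in FULL recovery and reclaim log space on a schedule.
BACKUP LOG VLDB01 TO DISK = 'J:\backup\vldb_log.trn'  -- truncates inactive log records
DBCC SHRINKFILE (vldb_log, 1000)                      -- target size in MB
```

In a scheduled job, Option 2 would simply run at the interval Eric mentions (hourly in his shop).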
From jwcolby at colbyconsulting.com Mon Sep 6 20:29:56 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Mon, 06 Sep 2004 21:29:56 -0400
Subject: [dba-SQLServer] HELP - Log file growing rapidly
In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011B60@ddi-01.DDI.local>
Message-ID: <000101c4947a$322e42b0$80b3fea9@ColbyM6805>

Michael,

May you have better luck than I. Of course it really has nothing to do with luck so much as experience. I am not a SQL Server kinda guy, so I am paying the price of brute force and ignorance.

John W. Colby
www.ColbyConsulting.com

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison
Sent: Monday, September 06, 2004 8:51 PM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] HELP - Log file growing rapidly

My fav sql site... http://www.sql-server-performance.com/default.asp

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD)
Sent: Tuesday, 7 September 2004 8:12 AM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] HELP - Log file growing rapidly

Hi John:

Here is a site with information specifically on SQL 2000 that you may not have already seen.
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From michael at ddisolutions.com.au Mon Sep 6 21:11:53 2004
From: michael at ddisolutions.com.au (Michael Maddison)
Date: Tue, 7 Sep 2004 12:11:53 +1000
Subject: [dba-SQLServer] Cannot obtain a lock
Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011B62@ddi-01.DDI.local>

Not my area of expertise but... try in QA:

EXEC sp_lock
and
EXEC sp_who

and KILL any blocking processes. Look up BOL for details on each statement.

HTH

Michael M

Well, if it ain't one thing it's another.

NOW... The server has stopped processing and all the workstations are stopped as well. When I try to look at the server properties I get:

"The SQL Server cannot obtain a lock resource at this time. Rerun your statement when there are fewer users or ask the system administrator to check the SQL Server lock and memory configuration."

Unfortunately I can't check the server lock and memory configuration, because I get an error... "can't obtain a lock resource..."

I am trying to cancel one of the 4 machines' imports to see if that frees up enough resources to go look at the problem, but I have to wonder whether the rollback can be done if it can't "get a lock resource".

Is there anything that can be done here?

John W. Colby
www.ColbyConsulting.com

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fhtapia at gmail.com Tue Sep 7 14:12:56 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Tue, 7 Sep 2004 12:12:56 -0700
Subject: [dba-SQLServer] Cannot obtain a lock
In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011B62@ddi-01.DDI.local>
References: <59A61174B1F5B54B97FD4ADDE71E7D01011B62@ddi-01.DDI.local>
Message-ID:

Did you solve this problem John?

On Tue, 7 Sep 2004 12:11:53 +1000, Michael Maddison wrote:
> Not my area of expertise but...

--
-Francisco

From jwcolby at colbyconsulting.com Tue Sep 7 21:25:00 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Tue, 07 Sep 2004 22:25:00 -0400
Subject: [dba-SQLServer] Gigabit switch
In-Reply-To:
Message-ID: <000d01c4954b$105a84b0$80b3fea9@ColbyM6805>

I got my 8-port gigabit switch today. With that switch, the entire dynamic of my SQL Server experience changes. Prior to this, the fastest method of importing using DTS was to have the raw data files on a disk on the SQL Server and have the server machine itself load the data into the database. I would average about 40k records per minute in that configuration.
My other new machine running DTS, inserting records into the server, didn't work much if any slower, but it also didn't work any faster. Now, with the gigabit switch, having the second new machine run DTS on files local to that machine, dumping the data into the server, is close to twice as fast as having the server do all of the work. Furthermore, and I don't understand this, having the second new machine run DTS while simultaneously performing another task, such as unzipping one of the raw text files, used to slow the DTS to a crawl. With the gigabit switch the DTS slows down slightly (just barely noticeable) but the other task proceeds at full speed as well.

Not to mention that I am also uploading all the data from my wife's laptop to a USB disk on the server in preparation to format/reinstall that machine.

It sure sounds like I had a LAN bottleneck, which I have now opened way up.

8-)

John W. Colby
www.ColbyConsulting.com

From accessd at shaw.ca Tue Sep 7 23:07:41 2004
From: accessd at shaw.ca (Jim Lawrence (AccessD))
Date: Tue, 07 Sep 2004 21:07:41 -0700
Subject: [dba-SQLServer] Gigabit switch
In-Reply-To: <000d01c4954b$105a84b0$80b3fea9@ColbyM6805>
Message-ID:

Hi John:

Just another thought about processing/importing your records. You can ask BCP to commit every n records. This way, you can just keep truncating the log every few minutes while BCP is still running. This allows one to load a humongous database using a relatively small log file. You may want to commit every 100,000 records or so. (A friend does this all the time when importing large amounts of data with little hard drive space.)

The gigabit switch seems like the ticket... there is no substitute for horsepower.

HTH
Jim

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby
Sent: Tuesday, September 07, 2004 7:25 PM
To: dba-sqlserver at databaseadvisors.com
Subject: [dba-SQLServer] Gigabit switch

I got my 8-port gigabit switch today.

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From jwcolby at colbyconsulting.com Tue Sep 7 23:50:59 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Wed, 08 Sep 2004 00:50:59 -0400
Subject: [dba-SQLServer] Gigabit switch
In-Reply-To:
Message-ID: <001401c4955f$72aa6d60$80b3fea9@ColbyM6805>

In fact I gave up on bcp because the data is surrounded by quotes.
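As an aside on the two bcp points in this thread: Jim's commit-every-n-records suggestion maps to bcp's -b (batch size) switch, and quote-wrapped fields can be stripped with a format file whose terminators include the quote characters. A hedged sketch only; the database, table, file, server, and column names are all hypothetical, and the column count and lengths would need to match the real file:

```
rem Hypothetical invocation: load in 100,000-row batches via a format file
bcp VLDB01.dbo.RawData in rawdata01.txt -f quotes.fmt -b 100000 -S server1 -T

rem quotes.fmt (a separate file, SQL Server 2000 format-file version 8.0):
rem the zero-length first field, mapped to table column 0 (skip), consumes
rem the leading quote; later terminators swallow the "," pairs and the
rem closing quote plus line break.
8.0
4
1  SQLCHAR  0  0    "\""      0  LeadingQuote  ""
2  SQLCHAR  0  100  "\",\""   1  Field1        ""
3  SQLCHAR  0  100  "\",\""   2  Field2        ""
4  SQLCHAR  0  100  "\"\r\n"  3  Field3        ""
```

Because each -b batch commits separately, the transaction log only ever needs to hold one batch's worth of work, which is the point Jim is making.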
DTS strips them off, but I can't for the life of me figure out how to have BCP do that. I'm almost done importing at this point. The imports are (crossing my fingers) proceeding smoothly; I am currently processing file 18 of 22, thus by tomorrow morning I should have them all in.

Then I have to figure out backup. IMMEDIATELY! I do NOT want to lose this stuff and have to do it again.

John W. Colby
www.ColbyConsulting.com

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD)
Sent: Wednesday, September 08, 2004 12:08 AM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] Gigabit switch

Hi John:

Just another thought about processing/importing your records. You can ask BCP to commit every n records.

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fhtapia at gmail.com Wed Sep 8 10:49:14 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Wed, 8 Sep 2004 08:49:14 -0700
Subject: [dba-SQLServer] Gigabit switch
In-Reply-To: <001401c4955f$72aa6d60$80b3fea9@ColbyM6805>
References: <001401c4955f$72aa6d60$80b3fea9@ColbyM6805>
Message-ID:

Out of curiosity, what brand switch did you get, and what device did you replace on your network? Also, what brand NICs did you get for the PCs?

On Wed, 08 Sep 2004 00:50:59 -0400, John W. Colby wrote:
> In fact I gave up on bcp because the data is surrounded by quotes.

--
-Francisco

From kens.programming at verizon.net Wed Sep 8 12:21:26 2004
From: kens.programming at verizon.net (Ken Stoker)
Date: Wed, 8 Sep 2004 10:21:26 -0700
Subject: [dba-SQLServer] Referencing Another Database from within a different current database connection.
Message-ID: <20040908171905.PRLB14580.out011.verizon.net@enterprise>

Everyone,

I am trying to copy data from one database into another database within my single SQL Server instance. I have qualified my table names with databasename.dbo.tablename in the following script, but SQL Server gives me the following error:

The column prefix 'Conflict.dbo' does not match with a table name or alias name used in the query.

Here is the script. Can anyone tell me what I am doing wrong?
INSERT INTO 90604.dbo.CodeTypes(CodeTypeID, Description, CreateUser, CreateDate, UpdateUser, UpdateDate) SELECT CodeTypeID, Description, CreateUser, CreateDate, UpdateUser, UpdateDate FROM Conflict.dbo.CodeTypes WHERE Conflict.dbo.CodeTypes NOT IN (SELECT DISTINCT CodeTypeID FROM 90604.dbo.Codes) I have done this before and I went back and reviewed the script I used, but everything looks the same to me. The only difference with the previous script was that I was copying data from one server to another server, so I had server name before the database name. I tried putting the server name on the front also but that didn't change anything. I also tried taking out 'dbo' and having databasename..tablename but it also didn't change anything. Thanks Ken From jwcolby at colbyconsulting.com Wed Sep 8 14:39:49 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 08 Sep 2004 15:39:49 -0400 Subject: [dba-SQLServer] Gigabit switch In-Reply-To: Message-ID: <002901c495db$9e390550$80b3fea9@ColbyM6805> Francisco, The switch is a d-link dgs-1008d. I didn't actually replace anything. I had a pair of wireless routers, one running 56mb only and one running 11mb only. I left them in place. The dlink di-624 talks directly to the cable modem, then directly feeds my tivo upstairs on a cable snaked through the wall, plus a cable over to the switch (which used to go to the 11mb router, a Netgear MR814v2). I moved the switch to directly behind the dlink and then fed the rest of the system off of that. Three computers go directly into that switch, my two new machines which have a built in 1gb nic, and my old machine which has a 100mb nic. Not sure what brand. My old (now my wife's) laptop runs wireless 11mb and my new laptop runs 56mb, each on different routers. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, September 08, 2004 11:49 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Gigabit switch Out of curiosity, what brand switch did you get, and what device did you replace on your network. plus what brand nic's did you get for the pc's? On Wed, 08 Sep 2004 00:50:59 -0400, John W. Colby wrote: > In fact I gave up on bcp because the data is surrounded by quotes. > DTS strips them off but I can't for the life of me figure out how to > have BCP do that. I'm almost done importing at this point. The > imports are (crossing my fingers) proceeding smoothly, I am currently > processing file 18 of 22, thus by tomorrow morning I should have them > all in. > > Then I have to figure out backup. IMMEDIATELY! I do NOT want to lose > this stuff and have to do it again. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim > Lawrence (AccessD) > Sent: Wednesday, September 08, 2004 12:08 AM > To: dba-sqlserver at databaseadvisors.com > Subject: RE: [dba-SQLServer] Gigabit switch > > Hi John: > > Just another thought about processing/importing your records > > You can ask BCP to commit every n records. This way, you can just > keep truncating the log every few minutes, while BCP is still running. > This allows one to load a humungous database using a relatively small > log file. You may want to commit every 100,000 records or so. (A > friend does this all the time when importing large amounts of data > with little hard drive space.) > > The gigabit switch seems like the ticket...there is no substitute for > horse-power. 
> > HTH > Jim > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John > W. Colby > Sent: Tuesday, September 07, 2004 7:25 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] Gigabit switch > > I got my 8 port gigabit switch today. With that switch the entire > dynamic of my SQL Server experience changes. Prior to this the > fastest method of importing using DTS was to have the raw data files > on a disk on the SQL Server and have the server machine itself load > the data into the database. I would average about 40k records per > minute in that configuration. > > My other new machine running DTS inserting records into the server > didn't work much if any slower, but it also didn't work any faster. > Now, with the gigabit switch, having the second new machine run DTS on > files local to that machine dumping the data into the server is close > to twice as fast as having the server doing all of the work. > Furthermore, and I don't understand this, having the second new > machine running DTS and simultaneously performing another task such as > unzipping one of the raw text files used to slow the DTS to a crawl. > With the gigabit switch the DTS slows down slightly, just barely > noticeable) but the other task proceeds at full speed as well. > > Not to mention that I am also uploading all the data from my wife's > laptop to a usb disk on the server in preparation to format / > reinstall that machine. > > It sure sounds like I had a LAN bottleneck which I have opened way up. > > 8-) -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From serbach at new.rr.com Wed Sep 8 13:22:10 2004 From: serbach at new.rr.com (Steven W. 
Erbach) Date: Wed, 8 Sep 2004 13:22:10 -0500 Subject: [dba-SQLServer] Gigabit switch In-Reply-To: <000d01c4954b$105a84b0$80b3fea9@ColbyM6805> References: <000d01c4954b$105a84b0$80b3fea9@ColbyM6805> Message-ID: <20040908132210.808347837.serbach@new.rr.com> John, What kind of gigabit switch did you get? Is all your wiring Cat 6 or 5e? I presume that you had gigabit cards in your servers and workstations already? Regards, Steve Erbach Scientific Marketing Neenah, WI 920-969-0504 Message created with Bloomba Disclaimer: No tree was killed in the transmission of this message. However, several coulombs of electrons were temporarily inconvenienced. From fhtapia at gmail.com Wed Sep 8 15:27:43 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 8 Sep 2004 13:27:43 -0700 Subject: [dba-SQLServer] Gigabit switch In-Reply-To: <002901c495db$9e390550$80b3fea9@ColbyM6805> References: <002901c495db$9e390550$80b3fea9@ColbyM6805> Message-ID: ah, well that makes a lot of sense. On Wed, 08 Sep 2004 15:39:49 -0400, John W. Colby wrote: > Francisco, > > The switch is a d-link dgs-1008d. I didn't actually replace anything. I > had a pair of wireless routers, one running 56mb only and one running 11mb > only. I left them in place. The dlink di-624 talks directly to the cable > modem, then directly feeds my tivo upstairs on a cable snaked through the > wall, plus a cable over to the switch (used to go to the 11mb router a > netgear mr814v2). I moved the switch to directly behind the dlink and then > fed the rest of the system off of that. Three computers go directly into > that switch, my two new machines which have a built in 1gb nic, and my old > machgine which has a 100mb nic. No se what brand. My old (now my wife's) > laptop runs wireless 11mb and my new laptop runs 56mb, each on different > routers. -- -Francisco From jwcolby at colbyconsulting.com Wed Sep 8 17:08:28 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Wed, 08 Sep 2004 18:08:28 -0400 Subject: [dba-SQLServer] Gigabit switch In-Reply-To: <20040908132210.808347837.serbach@new.rr.com> Message-ID: <002c01c495f0$623d7120$80b3fea9@ColbyM6805> My wiring is all Cat5 (it was for 100mbit). My two new machines came with gbit nics on the motherboard. My other two workstations have their old 100mbit nics. My laptops are wireless. My new laptop has a 10/100 nic and 11g wireless (54mbit?). John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Steven W. Erbach Sent: Wednesday, September 08, 2004 2:22 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Gigabit switch John, What kind of gigabit switch did you get? Is all your wiring Cat 6 or 5e? I presume that you had gigabit cards in your servers and workstations already? Regards, Steve Erbach Scientific Marketing Neenah, WI 920-969-0504 Message created with Bloomba Disclaimer: No tree was killed in the transmission of this message. However, several coulombs of electrons were temporarily inconvenienced. _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Wed Sep 8 22:13:17 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 08 Sep 2004 23:13:17 -0400 Subject: [dba-SQLServer] Gigabit switch In-Reply-To: <001401c4955f$72aa6d60$80b3fea9@ColbyM6805> Message-ID: <003c01c4961a$f4dd81d0$80b3fea9@ColbyM6805> The last file imported early this morning so the data is in! Next question, do I need to leave logging off (it's set to bulk actually) until I add the Identity PK? How does logging work in a case like that? Does SQL Server add a log entry for each record that it adds the value into? John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 08, 2004 12:51 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Gigabit switch In fact I gave up on bcp because the data is surrounded by quotes. DTS strips them off but I can't for the life of me figure out how to have BCP do that. I'm almost done importing at this point. The imports are (crossing my fingers) proceeding smoothly, I am currently processing file 18 of 22, thus by tomorrow morning I should have them all in. Then I have to figure out backup. IMMEDIATELY! I do NOT want to lose this stuff and have to do it again. John W. Colby www.ColbyConsulting.com From andrew.haslett at ilc.gov.au Wed Sep 8 22:34:37 2004 From: andrew.haslett at ilc.gov.au (Haslett, Andrew) Date: Thu, 9 Sep 2004 13:04:37 +0930 Subject: [dba-SQLServer] Gigabit switch Message-ID: <0A870603A2A816459078203FC07F4CD201E90E@adl01s055.ilcorp.gov.au> It depends on what type of recovery requirements you have: * If you wish to restore to any 'point in time' then you should use the FULL recovery model * If you wish to restore to any 'point in time' except for those transactions that are not fully logged, use the BULK-LOGGED recovery model * If you wish to restore only to the times when you've run a Full Database Backup, use the SIMPLE Recovery model. If you haven't already done so, I suggest downloading and reading the applicable sections in 'Books On Line'. It will explain it better than most of us can (for once it's actually a pretty decent Microsoft Reference!) Recovery models & logging are an important aspect to understand in SQL Server - it would have decreased your import times if you'd used the SIMPLE recovery model instead of BULK-LOGGED, for example... Cheers, A -----Original Message----- From: John W. 
Colby [mailto:jwcolby at colbyconsulting.com] Sent: Thursday, 9 September 2004 12:43 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Gigabit switch The last file imported early this morning so the data is in! Next question, do I need to leave logging off (it's set to bulk actually) until I add the Identity PK? How does logging work in a case like that? Does SQL Server add a log entry for each record that it adds the value into? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 08, 2004 12:51 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Gigabit switch In fact I gave up on bcp because the data is surrounded by quotes. DTS strips them off but I can't for the life of me figure out how to have BCP do that. I'm almost done importing at this point. The imports are (crossing my fingers) proceeding smoothly, I am currently processing file 18 of 22, thus by tomorrow morning I should have them all in. Then I have to figure out backup. IMMEDIATELY! I do NOT want to lose this stuff and have to do it again. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com IMPORTANT - PLEASE READ ******************** This email and any files transmitted with it are confidential and may contain information protected by law from disclosure. If you have received this message in error, please notify the sender immediately and delete this email from your system. No warranty is given that this email or files, if attached to this email, are free from computer viruses or other defects. 
They are provided on the basis the user assumes all responsibility for loss, damage or consequence resulting directly or indirectly from their use, whether caused by the negligence of the sender or not. From tuxedo_man at hotmail.com Thu Sep 9 02:32:58 2004 From: tuxedo_man at hotmail.com (Billy Pang) Date: Thu, 09 Sep 2004 07:32:58 +0000 Subject: [dba-SQLServer] Referencing Another Database from within a different current database connection. Message-ID: The source of the problem is probably the "Conflict.dbo.CodeTypes" reference in the WHERE clause. The database thinks that Conflict.dbo is a table alias, which it is not. Reference the column instead and it should work... (ie... WHERE CodeTypeID NOT IN ...) HTH Billy >From: "Ken Stoker" >Reply-To: dba-sqlserver at databaseadvisors.com >To: >Subject: [dba-SQLServer] Referencing Another Database from within a >different current database connection. >Date: Wed, 8 Sep 2004 10:21:26 -0700 > >Everyone, > > > >I am trying to copy data from one database into another database within my >single SQL Server instance. I have qualified my table names with > > > > databasename.dbo.tablename > > > > > >in the following script, but SQL Server gives me the following error: > > > > The column prefix 'Conflict.dbo' does not match with a table >name or alias name used in the query. > > > >Here is the script. Can anyone tell me what I am doing wrong? > > > >INSERT INTO 90604.dbo.CodeTypes(CodeTypeID, Description, CreateUser, >CreateDate, UpdateUser, UpdateDate) > >SELECT CodeTypeID, Description, CreateUser, CreateDate, UpdateUser, >UpdateDate > >FROM Conflict.dbo.CodeTypes > >WHERE Conflict.dbo.CodeTypes NOT IN (SELECT DISTINCT CodeTypeID FROM >90604.dbo.Codes) > > > >I have done this before and I went back and reviewed the script I used, but >everything looks the same to me. The only difference with the previous >script was that I was copying data from one server to another server, so I
I tried putting the server name >on the front also but that didn't change anything. I also tried taking out >'dbo' and having > > > > databasename..tablename > > > >but it also didn't change anything. > > > >Thanks > > > >Ken > > > >_______________________________________________ >dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com > _________________________________________________________________ Powerful Parental Controls Let your child discover the best the Internet has to offer. http://join.msn.com/?pgmarket=en-ca&page=byoa/prem&xAPID=1994&DI=1034&SU=http://hotmail.com/enca&HL=Market_MSNIS_Taglines Start enjoying all the benefits of MSN? Premium right now and get the first two months FREE*. From jwcolby at colbyconsulting.com Thu Sep 9 12:41:13 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Thu, 09 Sep 2004 13:41:13 -0400 Subject: [dba-SQLServer] SQL Server Backup In-Reply-To: <0A870603A2A816459078203FC07F4CD201E90E@adl01s055.ilcorp.gov.au> Message-ID: <004801c49694$37ff7030$80b3fea9@ColbyM6805> Can anyone explain to me what SQL Server does when it backs up. I have this largish database that I need to backup and I need to know if I need the same amount of room for the backup as is used for the db itself, if compression is used, how much it helps etc. John W. Colby www.ColbyConsulting.com From chizotz at mchsi.com Thu Sep 9 13:35:45 2004 From: chizotz at mchsi.com (chizotz at mchsi.com) Date: Thu, 09 Sep 2004 18:35:45 +0000 Subject: [dba-SQLServer] Global Variables in DTS Message-ID: <090920041835.25572.6ace@mchsi.com> I have a complex DTS package that pulls data out an Oracle production database, sums and otherwise manipulates it, and stores it in a SQL Server 2K database for reporting and further use. The DTS package is supposed to be run daily, to pull data from the previous day's entry into the production database. 
I have created a global variable in DTS, then use an Execute SQL to set the value of the variable to the max date already in the data + 1 day. This is not in production yet, but during my testing everything (apparently) worked fine for dates prior to 9/6/04. Even though the production database definitely has data past 9/6/04, the DTS package cannot seem to pull it. I've been tearing some hair out trying to figure this out, until I remembered that in order to get DTS to accept a global variable of type Date, I had to enter a default value. I went back and checked, and sure enough the value I entered was 9/6/04. That can't be coincidence. I've tried blanking out the value of the global variable, but DTS doesn't like that and complains about cannot convert from type BSTR to Date. I've tried replacing the value with a date some years in the future, but that also doesn't seem to work, and even though it appears to have saved the new value after running the package the value has reverted to 9/6/04. What's the trick to this that I'm missing??? Thanks for any help, Ron From fhtapia at gmail.com Thu Sep 9 15:20:24 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Thu, 9 Sep 2004 13:20:24 -0700 Subject: [dba-SQLServer] SQL Server Backup In-Reply-To: <004801c49694$37ff7030$80b3fea9@ColbyM6805> References: <0A870603A2A816459078203FC07F4CD201E90E@adl01s055.ilcorp.gov.au> <004801c49694$37ff7030$80b3fea9@ColbyM6805> Message-ID: My experience is that the backup file is about 1:1 to the db size. You can however back up to TAPE, which is fully supported by SQL Server. I understand your concerns for database backups.. I believe LiteSpeed is a product that backs up and compresses and works inline w/ SQL Server... here is their URL http://www.imceda.com/default.asp?LeadSource=SQLServerCentralSiteSponsorship On Thu, 09 Sep 2004 13:41:13 -0400, John W. Colby wrote: > Can anyone explain to me what SQL Server does when it backs up. 
I have this largish database that I need to backup and I need to know if I need the same amount of room for the backup as is used for the db itself, if compression is used, how much it helps etc. > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From chizotz at mchsi.com Thu Sep 9 17:05:05 2004 From: chizotz at mchsi.com (chizotz at mchsi.com) Date: Thu, 09 Sep 2004 22:05:05 +0000 Subject: [dba-SQLServer] Fail a DTS Step Based on Query Result? Message-ID: <090920042205.24968.7c36@mchsi.com> Is it possible to fail a step in a DTS package deliberately, based on a value returned from a query? I would like to branch differently depending on the results of a SQL task, but the only options are on completion, on success, or on failure. I don't see a way to do this, but it sure would be convenient. Thanks, Ron From John.Maxwell2 at ntl.com Thu Sep 9 18:23:07 2004 From: John.Maxwell2 at ntl.com (John Maxwell @ London City) Date: Fri, 10 Sep 2004 00:23:07 +0100 Subject: [dba-SQLServer] Fail a DTS Step Based on Query Result? Message-ID: Hello Ron, new to sql server so happy to be corrected but I would use the workflow properties and an active x script. 1)Add ActiveX Script task 2)Right-click on icon, select workflow - workflow properties 3)Select Options tab 4)Click Use ActiveX script 5)Click Properties 6)Write script to check the query result and determine success/failure Afraid I can't send script as mail sweeper here blocks it However BOL, under using ActiveX script in DTS, gives an example which you could adapt. 
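Where the check can live in the SQL itself, another option is to have the Execute SQL task raise its own error, so the step reports failure and the package's on-failure branch runs without any ActiveX at all. A hypothetical sketch; the table name is assumed, not taken from the thread:

```sql
-- Hypothetical sketch: raise a severity-16 error from the Execute SQL task
-- when the condition isn't met. The task then fails, and the package's
-- "on failure" workflow branch executes. dbo.StagingRows is an assumed name.
IF NOT EXISTS (SELECT 1 FROM dbo.StagingRows)
    RAISERROR ('No rows to process; failing this DTS step.', 16, 1)
```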
hope this helps john -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of chizotz at mchsi.com Sent: 09 September 2004 23:05 To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Fail a DTS Step Based on Query Result? Is it possible to fail a step in a DTS package delibertaely, based on a value returned from a query? I would like to branch differently depending on the results of a SQL task, but the only options are on completion, on success, or on failure. I don't see a way to do this, but it sure would be convenient. Thanks, Ron _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com The contents of this email and any attachments are sent for the personal attention of the addressee(s) only and may be confidential. If you are not the intended addressee, any use, disclosure or copying of this email and any attachments is unauthorised - please notify the sender by return and delete the message. Any representations or commitments expressed in this email are subject to contract. ntl Group Limited From jwcolby at colbyconsulting.com Thu Sep 9 20:52:50 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Thu, 09 Sep 2004 21:52:50 -0400 Subject: [dba-SQLServer] Can I abort EM job? In-Reply-To: <004801c49694$37ff7030$80b3fea9@ColbyM6805> Message-ID: <004b01c496d8$e43f6a00$80b3fea9@ColbyM6805> Well... I turned logging back on (bad move) then tried to add an identity field to my table. Two days later... The log file has run out of room (200g) the data files are about to run out of room etc. It gave me a warning that the original table was built without ANSI Null or some such and that it was going to build a new table with that turned on. So... Is there any halting the process? 
If I do will it roll back (for two damn days)? Am I screwed and just have to load the data all over again? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Thursday, September 09, 2004 1:41 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Backup Can anyone explain to me what SQL Server does when it backs up. I have this largish database that I need to backup and I need to know if I need the same amount of room for the backup as is used for the db itself, if compression is used, how much it helps etc. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From chizotz at mchsi.com Thu Sep 9 21:50:04 2004 From: chizotz at mchsi.com (Ron Allen) Date: Thu, 9 Sep 2004 21:50:04 -0500 Subject: [dba-SQLServer] Fail a DTS Step Based on Query Result? In-Reply-To: References: Message-ID: <895967245.20040909215004@mchsi.com> Hello John, Thanks. I couldn't find an example this afternoon, but I'll try again tomorrow. Ron Thursday, September 9, 2004, 6:23:07 PM, you wrote: JMLC> Hello Ron, JMLC> new to sql server so happy to be corrected JMLC> but I would use the workflow properties and an active x script. JMLC> 1)Add Active x Script task JMLC> 2)Right Click on icon, select workflow - workflow properties JMLC> 3)Select Options tab JMLC> 4)Click Use ActiveXscript JMLC> 5)Click propeties JMLC> 6)Write script to check query result and determine success failure JMLC> Afraid I can't send script as mail sweeper here blocks it JMLC> However BOL, under using active X script in DTS gives an example which you JMLC> could adapt. 
JMLC> hope this helps JMLC> john JMLC> -----Original Message----- JMLC> From: dba-sqlserver-bounces at databaseadvisors.com JMLC> [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of JMLC> chizotz at mchsi.com JMLC> Sent: 09 September 2004 23:05 JMLC> To: dba-sqlserver at databaseadvisors.com JMLC> Subject: [dba-SQLServer] Fail a DTS Step Based on Query Result? JMLC> Is it possible to fail a step in a DTS package delibertaely, based on a JMLC> value JMLC> returned from a query? I would like to branch differently depending on the JMLC> results of a SQL task, but the only options are on completion, on success, JMLC> or JMLC> on failure. I don't see a way to do this, but it sure would be convenient. JMLC> Thanks, JMLC> Ron From artful at rogers.com Fri Sep 10 00:04:12 2004 From: artful at rogers.com (Arthur Fuller) Date: Fri, 10 Sep 2004 01:04:12 -0400 Subject: [dba-SQLServer] Can I abort EM job? In-Reply-To: <004b01c496d8$e43f6a00$80b3fea9@ColbyM6805> Message-ID: <000401c496f3$9d151380$6501a8c0@rock> One thing that you absolutely must check as soon as possible is the data type of all your columns. Any Ntext fields should be converted to text, since this data is not for consumption in 2-byte countries. You always need twice or thrice the space that any given SQL job will consume. That's a basic rule of thumb. Don't shoot the messenger. I hope the client is paying you a lot for this gig, JC, because you're soon going to be as hairless as Andre Agassi. A. -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Thursday, September 09, 2004 9:53 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Can I abort EM job? Well... I turned logging back on (bad move) then tried to add an identity field to my table. Two days later... The log file has run out of room (200g) the data files are about to run out of room etc. 
It gave me a warning that the original table was built without ANSI Null or some such and that it was going to build a new table with that turned on. So... Is there any halting the process? If I do will it roll back (for two damn days)? Am I screwed and just have to load the data all over again? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Thursday, September 09, 2004 1:41 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Server Backup Can anyone explain to me what SQL Server does when it backs up. I have this largish database that I need to backup and I need to know if I need the same amount of room for the backup as is used for the db itself, if compression is used, how much it helps etc. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Fri Sep 10 12:56:18 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 10 Sep 2004 10:56:18 -0700 Subject: [dba-SQLServer] Can I abort EM job? In-Reply-To: <000401c496f3$9d151380$6501a8c0@rock> References: <004b01c496d8$e43f6a00$80b3fea9@ColbyM6805> <000401c496f3$9d151380$6501a8c0@rock> Message-ID: Aborting the job now won't cause you to lose data, but it will cause additional delays as the log files have to be used to restore the database to before you ran your job. On Fri, 10 Sep 2004 01:04:12 -0400, Arthur Fuller wrote: > One thing that you absolutely must check as soon as possible is the data > type of all your columns. 
Any Ntext fields should be converted to text, > since this data is not for consumption in 2-byte countries. > > You always need twice or thrice the space that any given SQL job will > consume. That's a basic rule of thumb. Don't shoot the messenger. > > I hope the client is paying you a lot for this gig, JC, because you're > soon going to be as hairless as Andre Agassi. > > A. > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. > Colby > Sent: Thursday, September 09, 2004 9:53 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] Can I abort EM job? > > Well... > > I turned logging back on (bad move) then tried to add an identity field > to my table. Two days later... The log file has run out of room (200g) > the data files are about to run out of room etc. It gave me a warning > that the original table was built without ANSI Null or some such and > that it was going to build a new table with that turned on. > > So... Is there any halting the process? If I do will it roll back (for > two damn days)? Am I screwed and just have to load the data all over > again? > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. > Colby > Sent: Thursday, September 09, 2004 1:41 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] SQL Server Backup > > Can anyone explain to me what SQL Server does when it backs up. I have > this largish database that I need to backup and I need to know if I need > the same amount of room for the backup as is used for the db itself, if > compression is used, how much it helps etc. > > John W. 
Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From Jeff at OUTBAKTech.com Fri Sep 10 15:17:13 2004 From: Jeff at OUTBAKTech.com (Jeff Barrows) Date: Fri, 10 Sep 2004 15:17:13 -0500 Subject: [dba-SQLServer] SQL Security question Message-ID: <8DA8776D2F418E46A2A464AC6CE630500326B8@outbaksrv1.outbaktech.com> Is it possible to add a user to SQL Server who is connecting from a different domain? I have a potential new client who needs to connect to their SQL data remotely, but their laptop is not part of the company domain. TIA Jeff B From CMackin at Quiznos.com Fri Sep 10 16:30:03 2004 From: CMackin at Quiznos.com (Mackin, Christopher) Date: Fri, 10 Sep 2004 15:30:03 -0600 Subject: [dba-SQLServer] SQL Security question Message-ID: Can you just give them a SQL Login instead of using Windows Authentication? -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Jeff Barrows Sent: Friday, September 10, 2004 2:17 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] SQL Security question Is it possible to add a user to SQL Server who is connecting from a different domain? I have a potential new client who needs to connect to their SQL data remotely, but their laptop is not part of the company domain. 
TIA Jeff B _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Fri Sep 10 16:53:38 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 10 Sep 2004 14:53:38 -0700 Subject: [dba-SQLServer] SQL Security question In-Reply-To: References: Message-ID: Yes. Your server should also be configured to support TCP/IP connections, and all your user needs is the IP address plus a UserID and password for the SQL Server. On Fri, 10 Sep 2004 15:30:03 -0600, Mackin, Christopher wrote: > Can you just give them a SQL Login instead of using Windows Authentication? > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Jeff > Barrows > Sent: Friday, September 10, 2004 2:17 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] SQL Security question > > Is it possible to add a user to SQL Server who is connecting from a different domain? I have a potential new client who needs to connect to their SQL data remotely, but their laptop is not part of the company domain. 
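The SQL-login approach can be set up in a few lines of SQL 2000 T-SQL. This is only a sketch: the login, password, and database names are made up, and the server must be running in mixed-mode authentication for SQL logins to work at all.

```sql
-- Create a SQL Server login (not tied to any Windows domain account),
-- then grant it access to the one database the client needs.
-- All names here are hypothetical.
EXEC sp_addlogin @loginame = 'remote_user',
                 @passwd   = 'Str0ng!Passw0rd',
                 @defdb    = 'ClientDB';

USE ClientDB;
EXEC sp_grantdbaccess @loginame = 'remote_user';

-- Limit what the login can do, e.g. read-only:
EXEC sp_addrolemember 'db_datareader', 'remote_user';
```

The remote laptop then connects with SQL authentication (server IP, user name, password) rather than its Windows credentials.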
> > TIA > > Jeff B > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From PaulR at Zslinc.com Fri Sep 10 17:55:26 2004 From: PaulR at Zslinc.com (PaulR) Date: Fri, 10 Sep 2004 18:55:26 -0400 Subject: [dba-SQLServer] Linked Server Problem for Connecting to Oracle Message-ID: <200409102255.i8AMtQdZ018511@mail713.megamailservers.com> Hi, Please help on this, thanks in advance. I am having a very hard time getting a connection between SQL Server 2000 and Oracle 8.1.7 using a linked server; I always get the same error message. The error occurs while connecting to Oracle from SQL Server 2000 on Windows 2003. I have tried MDAC 2.8 and MDAC 2.7 and restarted the server; nothing is working. We always get the same error: Error 7399: OLE DB Provider 'MSDAORA' reported an error. OLE DB error trace [OLE/DB Provider 'MSDAORA' IDBInitialize::Initialize returned 0x80004005: ]. But when I tried one of the Windows 2000 machines, it works fine. Please help... thanks, Paul From jwcolby at colbyconsulting.com Fri Sep 10 19:31:22 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Fri, 10 Sep 2004 20:31:22 -0400 Subject: [dba-SQLServer] Can I abort EM job? In-Reply-To: Message-ID: <000501c49796$adab9bd0$e8dafea9@ColbyM6805> Francisco, >but it will cause additional delays as the log files have to be used to restore the database to before you ran your job. If I had any indication of where in the process it was I might be tempted to let it run. However how do I know it won't take 2 more days? 
Or weeks? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Friday, September 10, 2004 1:56 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Can I abort EM job? aborting the job now won't cause you to lose data, but it will cause additional delays as the log files have to be used to restore the database to before you ran your job. From jwcolby at colbyconsulting.com Fri Sep 10 22:24:46 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Fri, 10 Sep 2004 23:24:46 -0400 Subject: [dba-SQLServer] What determines default In-Reply-To: Message-ID: <000001c497ae$e3a8ca10$e8dafea9@ColbyM6805> I am trying to import a small subset of one of my import files into a new table. The import fails with "not enough storage". When I look at the table definition (it creates the table, but fails on import) the field defs are varchar 8000. The other machine was bringing them in as nvarchar 50. I have to believe that this is the problem. How do I cause the table build to use nvarchar 50 instead of varchar 8000? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Friday, September 10, 2004 1:56 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Can I abort EM job? aborting the job now won't cause you to lose data, but it will cause additional delays as the log files have to be used to restore the database to before you ran your job. On Fri, 10 Sep 2004 01:04:12 -0400, Arthur Fuller wrote: > One thing that you absolutely must check as soon as possible is the > data type of all your columns. Any Ntext fields should be converted to > text, since this data is not for consumption in 2-byte countries. 
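The column-type point is worth a sketch: nvarchar and ntext store two bytes per character, so converting to single-byte types roughly halves those columns, which is safe when the data is plain ASCII. Table and column names below are hypothetical, and note that text/ntext columns cannot be changed with ALTER COLUMN in SQL 2000, so they need a workaround.

```sql
-- In-place conversion for ordinary string columns (hypothetical names):
ALTER TABLE tblList ALTER COLUMN City varchar(50) NULL;

-- text/ntext columns can't be ALTERed directly in SQL 2000; one
-- workaround is add-copy-drop-rename. The double cast truncates
-- anything past 4000 characters, so this is only for short values:
ALTER TABLE tblList ADD Notes_t text NULL;
UPDATE tblList
   SET Notes_t = CAST(CAST(Notes AS nvarchar(4000)) AS varchar(4000));
ALTER TABLE tblList DROP COLUMN Notes;
EXEC sp_rename 'tblList.Notes_t', 'Notes', 'COLUMN';
```

Rebuilding the clustered index afterwards (DBCC DBREINDEX) compacts the pages so the saved space actually comes back.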
> > You always need twice or thrice the space that any given SQL job will > consume. That's a basic rule of thumb. Don't shoot the messenger. > > I hope the client is paying you a lot for this gig, JC, because you're > soon going to be as hairless as Andre Agassi. > > A. > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John > W. Colby > Sent: Thursday, September 09, 2004 9:53 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] Can I abort EM job? > > Well... > > I turned logging back on (bad move) then tried to add an identity > field to my table. Two days later... The log file has run out of room > (200g) the data files are about to run out of room etc. It gave me a > warning that the original table was built without ANSI Null or some > such and that it was going to build a new table with that turned on. > > So... Is there any halting the process? If I do will it roll back > (for two damn days)? Am I screwed and just have to load the data all > over again? > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John > W. Colby > Sent: Thursday, September 09, 2004 1:41 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] SQL Server Backup > > Can anyone explain to me what SQL Server does when it backs up. I > have this largish database that I need to backup and I need to know if > I need the same amount of room for the backup as is used for the db > itself, if compression is used, how much it helps etc. > > John W. 
Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From prosoft6 at hotmail.com Fri Sep 10 09:01:21 2004 From: prosoft6 at hotmail.com (Julie Reardon-Taylor) Date: Fri, 10 Sep 2004 10:01:21 -0400 Subject: [dba-SQLServer] SQL Server Backup Message-ID: John, I use Veritas 9.0 with Sql Server module as well as their Intelligent Disaster Recovery. Both products have been very reliable. We are backing up to tape. Easy to set-up and easy to schedule. It allows for transaction log backups as well as full database backups. They will let you download a free evaluation copy-works for 30 days. Julie Reardon-Taylor PRO-SOFT OF NY, INC. 108 Franklin Street Watertown, NY 13601 (315) 785-0319 www.pro-soft.net From mmaddison at optusnet.com.au Sat Sep 11 04:29:35 2004 From: mmaddison at optusnet.com.au (Michael Maddison) Date: Sat, 11 Sep 2004 19:29:35 +1000 Subject: [dba-SQLServer] What determines default In-Reply-To: <000001c497ae$e3a8ca10$e8dafea9@ColbyM6805> Message-ID: I presume you are using DTS? You can edit the create table script when setting the destination. 
You might want to copy it to QA and do a replace on the 8000. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, 11 September 2004 1:25 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] What determines default I am trying to import a small subset of one of my import files into a new table. The import fails with "not enough storage". When I look at the table definition (it creates the table, but fails on import) the field defs are varchar 8000. The other machine was bringing them in as nvarchar 50. I have to believe that this is the problem. How do I cause the table build to use nvarchar 50 instead of varchar 8000? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Friday, September 10, 2004 1:56 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Can I abort EM job? aborting the job now won't cause you to lose data, but it will cause additional delays as the log files have to be used to restore the database to before you ran your job. On Fri, 10 Sep 2004 01:04:12 -0400, Arthur Fuller wrote: > One thing that you absolutely must check as soon as possible is the > data type of all your columns. Any Ntext fields should be converted to > text, since this data is not for consumption in 2-byte countries. > > You always need twice or thrice the space that any given SQL job will > consume. That's a basic rule of thumb. Don't shoot the messenger. > > I hope the client is paying you a lot for this gig, JC, because you're > soon going to be as hairless as Andre Agassi. > > A. > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John > W. 
Colby > Sent: Thursday, September 09, 2004 9:53 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] Can I abort EM job? > > Well... > > I turned logging back on (bad move) then tried to add an identity > field to my table. Two days later... The log file has run out of room > (200g) the data files are about to run out of room etc. It gave me a > warning that the original table was built without ANSI Null or some > such and that it was going to build a new table with that turned on. > > So... Is there any halting the process? If I do will it roll back > (for two damn days)? Am I screwed and just have to load the data all > over again? > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John > W. Colby > Sent: Thursday, September 09, 2004 1:41 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] SQL Server Backup > > Can anyone explain to me what SQL Server does when it backs up. I > have this largish database that I need to backup and I need to know if > I need the same amount of room for the backup as is used for the db > itself, if compression is used, how much it helps etc. > > John W. 
Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com --- Incoming mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 --- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 From jwcolby at colbyconsulting.com Sat Sep 11 06:25:17 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 11 Sep 2004 07:25:17 -0400 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: Message-ID: <000101c497f2$06f732c0$e8dafea9@ColbyM6805> I have this mongo database. Regardless of what I do it seems, SQL Server takes FOREVER to do anything. To the point where it appears that it is locked up. Is there ANY way to get EM to display a status of what it is doing so that I can even know that is doing something at all? 
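On the "is it doing anything at all" question: EM itself won't say, but Query Analyzer can be pointed at the same server while EM churns. A sketch using standard SQL 2000 commands; the spid and database name are hypothetical.

```sql
-- Who is busy, and is the work progressing? Run this twice, a minute
-- apart: if CPUTime/DiskIO for the spid keep climbing, it's working.
EXEC sp_who2;

-- Log usage per database; steady growth also means the job is alive.
DBCC SQLPERF(LOGSPACE);

-- Oldest open transaction in the database, with its start time.
DBCC OPENTRAN('MyBigDB');

-- If you have already killed the spid and it is rolling back, this
-- reports an estimated percent complete (53 is a hypothetical spid):
KILL 53 WITH STATUSONLY;
```

None of this speeds the operation up, but it distinguishes a hung EM session from a server that is simply grinding through a very large logged operation.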
Is there any way to break EM when it appears to be hung. This thing is just unworkable as it is. I will try to do something (anything it seems) and EM locks the entire damn machine up. No incrementing status, nothing except the busy cursor. I have built a smaller set of "just" 3 million records. It does the same thing. I need to get work done on this thing, set up indexes on fields, do data cleanup. It simply isn't going to work if everything I do locks the machine up for a week. I have to believe I have some setting(s) wrong for SQL Server itself. My hardware is an Athlon64 3.0ghz with 2.5g RAM. The machine itself is very close to as fast as you are going to get on a desktop machine. Everyone says "3 million records is nothing to SQL Server" but you couldn't prove it by me. Can anyone help me troubleshoot this thing and figure out what I am doing wrong? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Julie Reardon-Taylor Sent: Friday, September 10, 2004 10:01 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] SQL Server Backup John, I use Veritas 9.0 with Sql Server module as well as their Intelligent Disaster Recovery. Both products have been very reliable. We are backing up to tape. Easy to set-up and easy to schedule. It allows for transaction log backups as well as full database backups. They will let you download a free evaluation copy-works for 30 days. Julie Reardon-Taylor PRO-SOFT OF NY, INC. 
108 Franklin Street Watertown, NY 13601 (315) 785-0319 www.pro-soft.net _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From mwp.reid at qub.ac.uk Sat Sep 11 06:58:48 2004 From: mwp.reid at qub.ac.uk (Martin Reid) Date: Sat, 11 Sep 2004 12:58:48 +0100 Subject: [dba-SQLServer] SQL Server is hopelessly slow References: <000101c497f2$06f732c0$e8dafea9@ColbyM6805> Message-ID: <000a01c497f6$b344b530$0100a8c0@Martin> John How about using the command line tools and seeing if that speeds up operations? That was one of the recommendations from MS when working with large dbs: not to use EM at all. Martin ----- Original Message ----- From: "John W. Colby" To: Sent: Saturday, September 11, 2004 12:25 PM Subject: [dba-SQLServer] SQL Server is hopelessly slow >I have this mongo database. Regardless of what I do it seems, SQL Server > takes FOREVER to do anything. To the point where it appears that it is > locked up. Is there ANY way to get EM to display a status of what it is > doing so that I can even know that is doing something at all? Is there > any > way to break EM when it appears to be hung. This thing is just unworkable > as it is. I will try to do something (anything it seems) and EM locks the > entire damn machine up. No incrementing status, nothing except the busy > cursor. > > I have built a smaller set of "just" 3 million records. It does the same > thing. I need to get work done on this thing, set up indexes on fields, > do > data cleanup. It simply isn't going to work if everything I do locks the > machine up for a week. > > I have to believe I have some setting(s) wrong for SQL Server itself. My > hardware is an Athlon64 3.0ghz with 2.5g RAM. The machine itself is very > close to as fast as you are going to get on a desktop machine. 
Everyone > says "3 million records is nothing to SQL Server" but you couldn't prove > it > by me. Can anyone help me troubleshoot this thing and figure out what I > am > doing wrong? > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Julie > Reardon-Taylor > Sent: Friday, September 10, 2004 10:01 AM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] SQL Server Backup > > > John, > > I use Veritas 9.0 with Sql Server module as well as their Intelligent > Disaster Recovery. Both products have been very reliable. We are backing > up to tape. Easy to set-up and easy to schedule. It allows for > transaction > > log backups as well as full database backups. They will let you download > a > free evaluation copy-works for 30 days. > > > > Julie Reardon-Taylor > PRO-SOFT OF NY, INC. > 108 Franklin Street > Watertown, NY 13601 > (315) 785-0319 > www.pro-soft.net > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From ridermark at gmail.com Sat Sep 11 08:53:42 2004 From: ridermark at gmail.com (Mark Rider) Date: Sat, 11 Sep 2004 08:53:42 -0500 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: <000101c497f2$06f732c0$e8dafea9@ColbyM6805> References: <000101c497f2$06f732c0$e8dafea9@ColbyM6805> Message-ID: Have you downloaded / installed / used Books Online (BOL)? There is a LOT of information on what to do and how to do it in there, and it has taught me a LOT about what I have done wrong. 
It is a free download from MS at http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp You have a 64 bit processor, but are you running Windows Server 2003? That has the ability to take advantage of the processor architecture better than any other OS. The only other thing I can offer is that you give it some time and patience. I deal with about 25 million rows of data every day, and when DTS gets rolling along it seems to just sit there after the initial CSV import to the table. There is a lot going on behind the scenes, and I have learned (the hard way) that stopping the import and trying to start over will take more time than walking away for a while and coming back to the machine later. On Sat, 11 Sep 2004 07:25:17 -0400, John W. Colby wrote: > I have this mongo database. Regardless of what I do it seems, SQL Server > takes FOREVER to do anything. To the point where it appears that it is > locked up. Is there ANY way to get EM to display a status of what it is > doing so that I can even know that is doing something at all? Is there any > way to break EM when it appears to be hung. This thing is just unworkable > as it is. I will try to do something (anything it seems) and EM locks the > entire damn machine up. No incrementing status, nothing except the busy > cursor. > > I have built a smaller set of "just" 3 million records. It does the same > thing. I need to get work done on this thing, set up indexes on fields, do > data cleanup. It simply isn't going to work if everything I do locks the > machine up for a week. > > I have to believe I have some setting(s) wrong for SQL Server itself. My > hardware is an Athlon64 3.0ghz with 2.5g RAM. The machine itself is very > close to as fast as you are going to get on a desktop machine. Everyone > says "3 million records is nothing to SQL Server" but you couldn't prove it > by me. Can anyone help me troubleshoot this thing and figure out what I am > doing wrong? > > John W. 
Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Julie > Reardon-Taylor > Sent: Friday, September 10, 2004 10:01 AM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] SQL Server Backup > > John, > > I use Veritas 9.0 with Sql Server module as well as their Intelligent > Disaster Recovery. Both products have been very reliable. We are backing > up to tape. Easy to set-up and easy to schedule. It allows for transaction > > log backups as well as full database backups. They will let you download a > free evaluation copy-works for 30 days. > > Julie Reardon-Taylor > PRO-SOFT OF NY, INC. > 108 Franklin Street > Watertown, NY 13601 > (315) 785-0319 > www.pro-soft.net > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- Mark Rider http://commonsensesecurity.info From jwcolby at colbyconsulting.com Sat Sep 11 09:27:41 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 11 Sep 2004 10:27:41 -0400 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: Message-ID: <000201c4980b$8222afb0$e8dafea9@ColbyM6805> I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. 
I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Mark Rider Sent: Saturday, September 11, 2004 9:54 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] SQL Server is hopelessly slow Have you downloaded / installed / used Books Online (BOL)? There is a LOT of information on what to do and how to do it in there, and it has taught me a LOT about what I have done wrong. It is a free download from MS at http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp You have a 64 bit processor, but are you running Windows Server 2003? 
That has the ability to take advantage of the processor architecture better than any other OS. The only other thing I can offer is that you give it some time and patience. I deal with about 25 million rows of data every day, and when DTS gets rolling along it seems to just sit there after the initial CSV import to the table. There is a lot going on behind the scenes, and I have learned (the hard way) that stopping the import and trying to start over will take more time than walking away for a while and coming back to the machine later. On Sat, 11 Sep 2004 07:25:17 -0400, John W. Colby wrote: > I have this mongo database. Regardless of what I do it seems, SQL > Server takes FOREVER to do anything. To the point where it appears > that it is locked up. Is there ANY way to get EM to display a status > of what it is doing so that I can even know that is doing something at > all? Is there any way to break EM when it appears to be hung. This > thing is just unworkable as it is. I will try to do something > (anything it seems) and EM locks the entire damn machine up. No > incrementing status, nothing except the busy cursor. > > I have built a smaller set of "just" 3 million records. It does the > same thing. I need to get work done on this thing, set up indexes on > fields, do data cleanup. It simply isn't going to work if everything > I do locks the machine up for a week. > > I have to believe I have some setting(s) wrong for SQL Server itself. > My hardware is an Athlon64 3.0ghz with 2.5g RAM. The machine itself > is very close to as fast as you are going to get on a desktop machine. > Everyone says "3 million records is nothing to SQL Server" but you > couldn't prove it by me. Can anyone help me troubleshoot this thing > and figure out what I am doing wrong? > > John W. 
Colby > www.ColbyConsulting.com From mmaddison at optusnet.com.au Sat Sep 11 10:20:59 2004 From: mmaddison at optusnet.com.au (Michael Maddison) Date: Sun, 12 Sep 2004 01:20:59 +1000 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: <000201c4980b$8222afb0$e8dafea9@ColbyM6805> Message-ID: Sometimes it pays to do things one at a time in EM. I don't know what you've done but... try adding the field - add it as the last column, otherwise EM will drop and rebuild the whole table, save. Then change the column to increment - save. Actually I've got a nagging suspicion that EM won't let you change the field once it's saved... I can't test it here at home... see if that helps. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, 12 September 2004 12:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT there just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? 
A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Mark Rider Sent: Saturday, September 11, 2004 9:54 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] SQL Server is hopelessly slow Have you downloaded / installed / used Books Online (BOL)? There is a LOT of information on what to do and how to do it in there, and it has taught me a LOT about what I have done wrong. It is a free download from MS at http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp You have a 64 bit processor, but are you running Windows Server 2003? That has the ability to take advantage of the processor architecture better than any other OS. The only other thing I can offer is that you give it some time and patience. I deal with about 25 million rows of data every day, and when DTS gets rolling along it seems to just sit there after the initial CSV import to the table. There is a lot going on behind the scenes, and I have learned (the hard way) that stopping the import and trying to start over will take more time than walking away for a while and coming back to the machine later. On Sat, 11 Sep 2004 07:25:17 -0400, John W. Colby wrote: > I have this mongo database. 
Regardless of what I do it seems, SQL > Server takes FOREVER to do anything. To the point where it appears > that it is locked up. Is there ANY way to get EM to display a status > of what it is doing so that I can even know that is doing something at > all? Is there any way to break EM when it appears to be hung. This > thing is just unworkable as it is. I will try to do something > (anything it seems) and EM locks the entire damn machine up. No > incrementing status, nothing except the busy cursor. > > I have built a smaller set of "just" 3 million records. It does the > same thing. I need to get work done on this thing, set up indexes on > fields, do data cleanup. It simply isn't going to work if everything > I do locks the machine up for a week. > > I have to believe I have some setting(s) wrong for SQL Server itself. > My hardware is an Athlon64 3.0ghz with 2.5g RAM. The machine itself > is very close to as fast as you are going to get on a desktop machine. > Everyone says "3 million records is nothing to SQL Server" but you > couldn't prove it by me. Can anyone help me troubleshoot this thing > and figure out what I am doing wrong? > > John W. Colby > www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com --- Incoming mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 --- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 From jwcolby at colbyconsulting.com Sat Sep 11 10:30:48 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Sat, 11 Sep 2004 11:30:48 -0400 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: Message-ID: <000301c49814$5197fea0$e8dafea9@ColbyM6805> >add it as the last column, otherwise EM will drop and rebuild the whole table Ahh, the price of ignorance. Thanks, that could be one of my problems. I just pushed all the fields down and added the field at the top (using EM table design view). John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Saturday, September 11, 2004 11:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow Sometimes it pays do do things 1 at a time in EM. I don't know what you've done but... try adding the field - add it as the last column, otherwise EM will drop and rebuild the whole table, save. then change the column to increment - save. Actually I've got a nagging suspicion that EM wont let you change the field once its saved... I can't test it here at home... see if that helps. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, 12 September 2004 12:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. 
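Michael's add-it-as-the-last-column advice can also be done entirely in T-SQL, which sidesteps the designer's drop-and-rebuild; a sketch, with a hypothetical table name. SQL Server still has to write an identity value into every existing row, so it is not instant on 65 million rows, but it avoids copying the whole table:

```sql
-- ALTER TABLE always appends the new column at the end, so the table
-- is not dropped and re-created the way the EM designer does it when
-- a column is inserted anywhere else.
ALTER TABLE dbo.BigList               -- dbo.BigList is a placeholder name
ADD RowID int IDENTITY(1, 1) NOT NULL
```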
I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. Colby www.ColbyConsulting.com From ebarro at afsweb.com Sat Sep 11 10:47:49 2004 From: ebarro at afsweb.com (Eric Barro) Date: Sat, 11 Sep 2004 08:47:49 -0700 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: <000301c49814$5197fea0$e8dafea9@ColbyM6805> Message-ID: John, This doesn't address your SQL server issue but... You may run into a problem installing Windows 2003 server on top of XP Pro. We tried doing that with Windows 2000 Pro and the Windows 2003 compatibility checker didn't want to install on top of the pro version. We tried it on a machine that had Windows 2000 server and it didn't have a problem. I'm guessing that in order to upgrade an OS to a server product you need to already have a similar server product on the machine. SQL server is known to be such a resource hog so the more CPU and memory you can throw at it the better it will (or should) perform. 
When we bumped our machine's memory to 2Gb and doubled the CPU power (by adding a second CPU) and installed a Gigabit NIC it was happy. We have it running on a Windows 2003 OS. I'm pretty sure though that it's running not a 64 bit processor platform. As I mentioned earlier, you should get a performance boost by splitting the read/writes for the data files and the log files if you have SQL server writing to a different controller/drive subsystem. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 11, 2004 8:31 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow >add it as the last column, otherwise EM will drop and rebuild the whole table Ahh, the price of ignorance. Thanks, that could be one of my problems. I just pushed all the fields down and added the field at the top (using EM table design view). John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Saturday, September 11, 2004 11:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow Sometimes it pays do do things 1 at a time in EM. I don't know what you've done but... try adding the field - add it as the last column, otherwise EM will drop and rebuild the whole table, save. then change the column to increment - save. Actually I've got a nagging suspicion that EM wont let you change the field once its saved... I can't test it here at home... see if that helps. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. 
Colby Sent: Sunday, 12 September 2004 12:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From ebarro at afsweb.com Sat Sep 11 10:57:02 2004 From: ebarro at afsweb.com (Eric Barro) Date: Sat, 11 Sep 2004 08:57:02 -0700 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: <000301c49814$5197fea0$e8dafea9@ColbyM6805> Message-ID: John, Here are two links you might want to check out... http://www.experts-exchange.com/Databases/Microsoft_SQL_Server/Q_20841609.html http://www.sql-server-performance.com/sql_server_configuration_settings.asp To summarize some of the points in those two links... -Make sure your tempdb and logfiles are on different drives than the data--that will make a HUGE difference. -Put the database into single-user mode if possible to do the update.... (1) Swapping is probably a serious issue here, as SQL Server will attempt to keep all of the records affected in memory as much as possible. (2) TempDB usage is also large here, as that will be used for temporary storage of the records to be affected. (3) The combined disk swapping will slow your updates down dramatically (4) If all processing can be done without TempDB and Disk Swapping, process speed can be increased 100-fold or more (I have seen this in a query on my own server box), especially if TempDB and the system swap file are on the same drive (and even more so if the database is on the same drive). --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. 
Colby Sent: Saturday, September 11, 2004 8:31 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow >add it as the last column, otherwise EM will drop and rebuild the whole table Ahh, the price of ignorance. Thanks, that could be one of my problems. I just pushed all the fields down and added the field at the top (using EM table design view). John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Saturday, September 11, 2004 11:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow Sometimes it pays do do things 1 at a time in EM. I don't know what you've done but... try adding the field - add it as the last column, otherwise EM will drop and rebuild the whole table, save. then change the column to increment - save. Actually I've got a nagging suspicion that EM wont let you change the field once its saved... I can't test it here at home... see if that helps. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, 12 September 2004 12:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. 
I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sat Sep 11 11:10:08 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 11 Sep 2004 12:10:08 -0400 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: Message-ID: <000401c49819$d2145a10$e8dafea9@ColbyM6805> At this point I'm waiting for the price (and size) of memory to drop. I bought the 1g sticks and have 2 in each machine. I am not even finding 2g sticks and if they exist they are no doubt VERY expensive. AFAIK XP can only use 2g anyway so in order to use more than that I will need to somehow upgrade the OS. 
I looked at a dual CPU MB but the whole package was just SO much more that I couldn't swing it. The MB is 4 times more, the chipsets suck (old AMD chipsets), and the processor is about 50% more for the same clock and requires registered memory, which is somewhat more expensive. All in all it was going to just about double the cost per processor, and in the end the second proc can ONLY be used in certain OSs and even then rarely adds more than a 50% speed increase, sometimes not even that. I may go there down the road but for now Athlon64s in the 754-pin socket are the price/performance sweet spot and only a few clicks back from the fastest out there. As for the log files... I am actually running 4 SATA disks which are by definition on their own controller, plus an IDE drive which of course is on a different controller, so all that is covered. The motherboards have a gigabit NIC and I just bought an 8 port switch so that is covered as well. Short of moving to full 64 bit I am as close to fast as I'm going to get. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Saturday, September 11, 2004 11:48 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow John, This doesn't address your SQL server issue but... You may run into a problem installing Windows 2003 server on top of XP Pro. We tried doing that with Windows 2000 Pro and the Windows 2003 compatibility checker didn't want to install on top of the pro version. We tried it on a machine that had Windows 2000 server and it didn't have a problem. I'm guessing that in order to upgrade an OS to a server product you need to already have a similar server product on the machine. SQL server is known to be such a resource hog so the more CPU and memory you can throw at it the better it will (or should) perform. 
When we bumped our machine's memory to 2Gb and doubled the CPU power (by adding a second CPU) and installed a Gigabit NIC it was happy. We have it running on a Windows 2003 OS. I'm pretty sure though that it's running not a 64 bit processor platform. As I mentioned earlier, you should get a performance boost by splitting the read/writes for the data files and the log files if you have SQL server writing to a different controller/drive subsystem. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 11, 2004 8:31 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow >add it as the last column, otherwise EM will drop and rebuild the whole table Ahh, the price of ignorance. Thanks, that could be one of my problems. I just pushed all the fields down and added the field at the top (using EM table design view). John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Saturday, September 11, 2004 11:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow Sometimes it pays do do things 1 at a time in EM. I don't know what you've done but... try adding the field - add it as the last column, otherwise EM will drop and rebuild the whole table, save. then change the column to increment - save. Actually I've got a nagging suspicion that EM wont let you change the field once its saved... I can't test it here at home... see if that helps. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. 
Colby Sent: Sunday, 12 September 2004 12:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sat Sep 11 11:17:56 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 11 Sep 2004 12:17:56 -0400 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: Message-ID: <000501c4981a$e9432580$e8dafea9@ColbyM6805> Thanks for the references. I'm not sure that swapping is something I can control. My mem is just about maxed out and the database is so huge that no matter what operation you try it won't fit in memory. I need to get some indexes on fields like zip or state so that I can break the record sets down into smaller chunks and do things on smaller pieces. Unfortunately the data is padded with spaces, no idea why. But I have to figure out how to get rid of the trailing spaces economically. I have processor time, I can set up a job to strip spaces off of a field by state and go to work, do the next one that evening etc. It's simply silly that it is taking so long that I can't get anything at all done in less than 24 hours. In the end though, operations like adding indexes are by definition going to happen on the entire record set. Nothing to do there but bite the bullet, start it and pray it will complete sometime before Christmas. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Eric Barro Sent: Saturday, September 11, 2004 11:57 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow John, Here are two links you might want to check out... http://www.experts-exchange.com/Databases/Microsoft_SQL_Server/Q_20841609.ht ml http://www.sql-server-performance.com/sql_server_configuration_settings.asp To summarize some of the points in those two links... -Make sure your tempdb and logfiles are on different drives than the data--that will make a HUGE difference. -Put the database into single-user mode if possible to do the update.... (1) Swapping is probably a serious issue here, as SQL Server will attempt to keep all of the records affected in memory as much as possible. (2) TempDB usage is also large here, as that will be used for temporary storage of the records to be affected. (3) The combined disk swapping will slow your updates down dramatically (4) If all processing can be done without TempDB and Disk Swapping, process speed can be increased 100-fold or more (I have seen this in a query on my own server box), especially if TempDB and the system swap file are on the same drive (and even more so if the database is on the same drive). --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 11, 2004 8:31 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow >add it as the last column, otherwise EM will drop and rebuild the whole table Ahh, the price of ignorance. Thanks, that could be one of my problems. 
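John's earlier plan — stripping the padded trailing spaces one state at a time so the work comes in manageable chunks — might look like this in T-SQL; the table and column names here are made up for illustration:

```sql
-- Updating one state per run keeps each transaction, and therefore the
-- transaction log, to a manageable size; schedule one state per evening.
UPDATE dbo.BigList                    -- hypothetical table/column names
SET    FirstName = RTRIM(FirstName),
       LastName  = RTRIM(LastName)
WHERE  State = 'NC'                   -- repeat per state code
```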
I just pushed all the fields down and added the field at the top (using EM table design view). John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Saturday, September 11, 2004 11:21 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow Sometimes it pays do do things 1 at a time in EM. I don't know what you've done but... try adding the field - add it as the last column, otherwise EM will drop and rebuild the whole table, save. then change the column to increment - save. Actually I've got a nagging suspicion that EM wont let you change the field once its saved... I can't test it here at home... see if that helps. cheers Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, 12 September 2004 12:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. 
With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jmoss111 at bellsouth.net Sat Sep 11 12:09:04 2004 From: jmoss111 at bellsouth.net (JMoss) Date: Sat, 11 Sep 2004 12:09:04 -0500 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: <000201c4980b$8222afb0$e8dafea9@ColbyM6805> Message-ID: John, Microsoft has a free 360 day trial version of Windows XP for 64 bit architecture available: http://www.microsoft.com/windowsxp/64bit/evaluation/upgrade.mspx You might just want to add your ID field from Query Analyzer (QA) rather than using the GUI: ALTER TABLE [dbo].[Yourtable] ADD [ID] [int] IDENTITY (1, 1) NOT NULL (the two IDENTITY arguments are the starting seed and the increment) Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. 
Colby Sent: Saturday, September 11, 2004 9:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Sat Sep 11 12:55:19 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Sat, 11 Sep 2004 13:55:19 -0400 Subject: [dba-SQLServer] SQL Server is hopelessly slow In-Reply-To: Message-ID: <000601c49828$81e26c30$e8dafea9@ColbyM6805> When I try to add the PK from EM I get a warning that the table was created with Ansi_Null off and the table will be recreated with ANSI_NULL on. I had logging turned on last time I tried that and it ran for 3 days with no indication whether or not it ever intended to finish. I have turned logging to simple and I suppose I will try it again and see what happens. I also backed up (zipped up actually) the db and log files and am going to copy them off to my backup machine before I start this so that I can at least get it going again on the other machine if the db never comes back. The last time I shut down the process, and the rollback did occur, it took well over 2 days to complete but it finally did. Of course with no log file this time... John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of JMoss Sent: Saturday, September 11, 2004 1:09 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow John, Microsoft has a free 360 day trial version of WIndows XP for 64 bit architecture available: http://www.microsoft.com/windowsxp/64bit/evaluation/upgrade.mspx You might just want to add your ID field from qa rather than using the gui: ALTER TABLE [dbo].[Yourtable] ADD [ID] [int] IDENTITY (1, 1) NOT NULL (starting seed #, increment by) Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Saturday, September 11, 2004 9:28 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] SQL Server is hopelessly slow I am not running Server 2003 YET. 
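Turning "logging to simple", as John describes, is a per-database option; a sketch for SQL Server 2000, with a hypothetical database name. Note that even under simple recovery, one huge ALTER or UPDATE is still a single transaction, so the log grows until it commits, and an abort still means a full rollback:

```sql
-- SQL Server 2000 syntax; 'BigDB' is a placeholder database name.
ALTER DATABASE BigDB SET RECOVERY SIMPLE

-- After a bulky operation the log file can be shrunk back down;
-- the argument is the log file's logical name (assumed here).
DBCC SHRINKFILE (BigDB_Log)
```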
I have it, but I have to get the drivers for my motherboard loaded. The CD that comes with the MB does a system check at install and prevents loading unless the OS is in a list of supported OSs and Server 2003 is not in that list. Tech support for the MB company says the drivers should work so... I ended up loading XP Pro just to get up and running. I wonder if I could do an OS upgrade to Server 2003 over the top of XP Pro. Since the drivers are loaded, perhaps I could get it installed that way. I can certainly appreciate "a lot going on" but for example I tried to add an identifier field (auto increment long) to the table. AFAICT There just isn't any way to do that before the load so I have to do it when I am done. I started it running and THREE DAYS LATER my machine is still locked up. With no feedback from EM I have no idea if it will be finished in an hour or it is only on the 3 millionth row with 160 million rows to go? A few hours left or 3 years? This is no way to run a company! I re-imported a single set of 3 million records and am about to try setting up the identifier field on that subset and time how long it takes. However my first machine is still locked up trying to roll back the previous attempt on the entire database. Now I start this on my remaining fast machine. What if it locks that up for days on end? This is simply silly. There must be a way for SQL Server to write a status to a log file or SOMETHING. I just can't believe that this superpowerful whizbang database engine won't tell me whether it is doing something or simply on lunch break. John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From michael at ddisolutions.com.au Mon Sep 13 06:49:07 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Mon, 13 Sep 2004 21:49:07 +1000 Subject: [dba-SQLServer] Opinions welcome Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011B8B@ddi-01.DDI.local> I have a client who is contemplating installing an application in 28 sites over an area of about 500 sq miles. They initially consulted me to ask what was involved in doing the installs remotely... At that stage they said to me they were going to install SQL in all 28 sites. I asked for clarification thinking they surely only needed MSDE at each site otherwise the cost would have been enormous (I know these people don't have that sort of money). Anyway, I've just had a look at the db (installed @ my office) and application (you can tell its been upsized from Access, all those varchar(255) fields). They now want me to advise them on replicating the 28 db's back to their central server! I've had a look at the schema and there is no native way to split the data by site. Do I 1. Run like hell ;-) 2. Replicate back to 28 db's 3. Build a master db that can tell 1 site from another 4. Run the apps client/server Can I 1. Append a site field to the replicated data? 2. Replicate the 28 --> 28 --> 1 (master) I'm going to a meeting tomorrow and just wanted to see if I had missed any obvious options. cheers Michael M From jwcolby at colbyconsulting.com Mon Sep 13 08:29:18 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 13 Sep 2004 09:29:18 -0400 Subject: [dba-SQLServer] Opinions welcome In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011B8B@ddi-01.DDI.local> Message-ID: <001401c49995$ad6b1730$e8dafea9@ColbyM6805> My vote would be to run like hell. 
Any client that hires you and TELLS you what they are going to do probably has a "stock room" guru telling them how to run things, or possibly even doing the stuff. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Monday, September 13, 2004 7:49 AM To: dba-SQLServer Subject: [dba-SQLServer] Opinions welcome I have a client who is contemplating installing an application in 28 sites over an area of about 500 sq miles. They initially consulted me to ask what was involved in doing the installs remotely... At that stage they said to me they were going to install SQL in all 28 sites. I asked for clarification thinking they surely only needed MSDE at each site otherwise the cost would have been enormous (I know these people don't have that sort of money). Anyway, I've just had a look at the db (installed @ my office) and application (you can tell its been upsized from Access, all those varchar(255) fields). They now want me to advise them on replicating the 28 db's back to their central server! I've had a look at the schema and there is no native way to split the data by site. Do I 1. Run like hell ;-) 2. Replicate back to 28 db's 3. Build a master db that can tell 1 site from another 4. Run the apps client/server Can I 1. Append a site field to the replicated data? 2. Replicate the 28 --> 28 --> 1 (master) I'm going to a meeting tomorrow and just wanted to see if I had missed any obvious options. 
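[Editor's sketch: the "append a site field" option in Michael's list above might look like this. The table, column, and constraint names are all hypothetical — the shrinkwrap schema would dictate the real ones — and each site would get a different default value (or the value could be stamped during the consolidation step instead).]

```sql
-- Tag every row with the originating site before rolling the 28
-- databases up to the central server; all names are hypothetical:
ALTER TABLE [dbo].[tblActivity]
    ADD [SiteID] TINYINT NOT NULL
    CONSTRAINT [DF_tblActivity_SiteID] DEFAULT 1
```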
cheers Michael M _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From ebarro at afsweb.com Mon Sep 13 09:10:48 2004 From: ebarro at afsweb.com (Eric Barro) Date: Mon, 13 Sep 2004 07:10:48 -0700 Subject: [dba-SQLServer] Opinions welcome In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011B8B@ddi-01.DDI.local> Message-ID: Michael, M$ replication is a resource hog. You might want to consider log shipping instead. If each site has less than 20 users you might consider an Access db at each location with a VB front end talking to the central SQL server via the web. This way each site owns their data. The VB front end writes data to the local Access MDB and to the remote SQL server. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Michael Maddison Sent: Monday, September 13, 2004 4:49 AM To: dba-SQLServer Subject: [dba-SQLServer] Opinions welcome I have a client who is contemplating installing an application in 28 sites over an area of about 500 sq miles. They initially consulted me to ask what was involved in doing the installs remotely... At that stage they said to me they were going to install SQL in all 28 sites. I asked for clarification thinking they surely only needed MSDE at each site otherwise the cost would have been enormous (I know these people don't have that sort of money). Anyway, I've just had a look at the db (installed @ my office) and application (you can tell its been upsized from Access, all those varchar(255) fields). They now want me to advise them on replicating the 28 db's back to their central server! I've had a look at the schema and there is no native way to split the data by site. Do I 1. 
Run like hell ;-) 2. Replicate back to 28 db's 3. Build a master db that can tell 1 site from another 4. Run the apps client/server Can I 1. Append a site field to the replicated data? 2. Replicate the 28 --> 28 --> 1 (master) I'm going to a meeting tomorrow and just wanted to see if I had missed any obvious options. cheers Michael M _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From mmaddison at optusnet.com.au Mon Sep 13 10:09:15 2004 From: mmaddison at optusnet.com.au (Michael Maddison) Date: Tue, 14 Sep 2004 01:09:15 +1000 Subject: [dba-SQLServer] Opinions welcome In-Reply-To: Message-ID: Cool, log shipping; I haven't done that yet, sounds like fun ;-))) The software is shrinkwrap, so I no touchy touchy :-) It looks ok though once I got it installed; there are 4 install modes but I could only get 1 of them to work! cheers Michael M Michael, M$ replication is a resource hog. You might want to consider log shipping instead. If each site has fewer than 20 users you might consider an Access db at each location with a VB front end talking to the central SQL server via the web. This way each site owns their data. The VB front end writes data to the local Access MDB and to the remote SQL server. --- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Michael Maddison Sent: Monday, September 13, 2004 4:49 AM To: dba-SQLServer Subject: [dba-SQLServer] Opinions welcome I have a client who is contemplating installing an application in 28 sites over an area of about 500 sq miles. They initially consulted me to ask what was involved in doing the installs remotely... 
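[Editor's sketch: Eric's log-shipping suggestion above boils down to shipping transaction-log backups from each site and restoring them centrally. A minimal manual version might look like this — all paths and database names are hypothetical, and a real setup would schedule these steps and keep each standby copy in NORECOVERY or STANDBY mode between restores.]

```sql
-- At a site server: back up the transaction log to a file that
-- then gets copied to the central office (path is hypothetical):
BACKUP LOG [SiteDb] TO DISK = 'D:\LogShip\SiteDb_log.trn' WITH INIT

-- At the central server: apply the shipped log to that site's
-- standby copy, leaving it ready to accept the next log:
RESTORE LOG [SiteDb_Site01] FROM DISK = 'E:\Incoming\SiteDb_log.trn'
    WITH NORECOVERY
```

[Note that this yields 28 read-only standby copies, one per site — it does not merge the data into a single database, so a consolidation step (or a site-identifier column) would still be needed for central reporting.]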
At that stage they said to me they were going to install SQL in all 28 sites. I asked for clarification thinking they surely only needed MSDE at each site otherwise the cost would have been enormous (I know these people don't have that sort of money). Anyway, I've just had a look at the db (installed @ my office) and application (you can tell its been upsized from Access, all those varchar(255) fields). They now want me to advise them on replicating the 28 db's back to their central server! I've had a look at the schema and there is no native way to split the data by site. Do I 1. Run like hell ;-) 2. Replicate back to 28 db's 3. Build a master db that can tell 1 site from another 4. Run the apps client/server Can I 1. Append a site field to the replicated data? 2. Replicate the 28 --> 28 --> 1 (master) I'm going to a meeting tomorrow and just wanted to see if I had missed any obvious options. cheers Michael M _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com --- Incoming mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 --- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). 
Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 From mmaddison at optusnet.com.au Mon Sep 13 10:09:13 2004 From: mmaddison at optusnet.com.au (Michael Maddison) Date: Tue, 14 Sep 2004 01:09:13 +1000 Subject: [dba-SQLServer] Opinions welcome In-Reply-To: <001401c49995$ad6b1730$e8dafea9@ColbyM6805> Message-ID: I think you may be right ;-))) The lady who is trying to get me involved thinks she can program. We've fixed her messes before ;-/ I'll see what goes down at this meeting; she did mention that they thought they could run the app from a single db but I can't see how... It all might fall in a heap anyhow. I've got an even worse client... I quoted 40K for a job; they turned around and asked me to requote, but this time they threw in a minimum performance criterion of 3 secs... Only problem is the hardware they have is not too crash hot :-( An old server (dual P? 500's, 512 mb) running SQL 2K AND IIS, and the task mgr shows the CPUs running 50-60% ALL the time!!! The proposed app uses SQL and IIS and also has to query a couple of Oracle db's on the fly... To top it all off, the last leg of the network has to be done over a wireless connection, using hardware they haven't even bought yet!!! This is a multinational company!!! Do they think I'm stupid? Sometimes I think I should just go work for someone else... cheers Michael M My vote would be to run like hell. Any client that hires you and TELLS you what they are going to do probably has a "stock room" guru telling them how to run things, or possibly even doing the stuff. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Monday, September 13, 2004 7:49 AM To: dba-SQLServer Subject: [dba-SQLServer] Opinions welcome I have a client who is contemplating installing an application in 28 sites over an area of about 500 sq miles. 
They initially consulted me to ask what was involved in doing the installs remotely... At that stage they said to me they were going to install SQL in all 28 sites. I asked for clarification thinking they surely only needed MSDE at each site otherwise the cost would have been enormous (I know these people don't have that sort of money). Anyway, I've just had a look at the db (installed @ my office) and application (you can tell its been upsized from Access, all those varchar(255) fields). They now want me to advise them on replicating the 28 db's back to their central server! I've had a look at the schema and there is no native way to split the data by site. Do I 1. Run like hell ;-) 2. Replicate back to 28 db's 3. Build a master db that can tell 1 site from another 4. Run the apps client/server Can I 1. Append a site field to the replicated data? 2. Replicate the 28 --> 28 --> 1 (master) I'm going to a meeting tomorrow and just wanted to see if I had missed any obvious options. cheers Michael M _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com --- Incoming mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 --- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.718 / Virus Database: 474 - Release Date: 9/07/2004 From jwcolby at colbyconsulting.com Mon Sep 13 10:12:55 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Mon, 13 Sep 2004 11:12:55 -0400 Subject: [dba-SQLServer] What determines default In-Reply-To: <000001c497ae$e3a8ca10$e8dafea9@ColbyM6805> Message-ID: <002401c499a4$28f9ead0$e8dafea9@ColbyM6805> FYI, this was caused by installing SP3 to SQL Server. I still haven't discovered how to set the defaults for this stuff in SQL Server. I assume it's out there somewhere. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Friday, September 10, 2004 11:25 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] What determines default I am trying to import a small subset of one of my import files into a new table. The import fails with "not enough storage". When I look at the table definition (it creates the table, but fails on import) the field defs are varchar 8000. The other machine was bringing them in as nvarchar 50. I have to believe that this is the problem. How do I cause the table build to use nvarchar 50 instead of varchar 8000? John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Mon Sep 13 10:18:30 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Mon, 13 Sep 2004 11:18:30 -0400 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <413C5A12.13447.A4917D6@lexacorp.com.pg> Message-ID: <002501c499a4$ed47d2d0$e8dafea9@ColbyM6805> OK, now for a 660 field table? ;-) John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Sunday, September 05, 2004 10:38 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Quotes in delimited data On 5 Sep 2004 at 20:34, John W. Colby wrote: Can BCP be told to ignore the quotes? 
You need to edit the BCP format file to specify the delimiters for each field. For a four-field import where the fields are
1 - Numeric
2 - Quoted string
3 - Quoted string
4 - Numeric
(i.e. 111,"John","Colby",99)

1 SQLCHAR 0 0 ",\"" 1 f1 ""
2 SQLCHAR 0 0 "\",\"" 2 f2 ""
3 SQLCHAR 0 0 "\"," 3 f3 ""
4 SQLCHAR 0 0 "\"\n" 4 f4 ""

-- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From accessd at shaw.ca Mon Sep 13 11:25:31 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Mon, 13 Sep 2004 09:25:31 -0700 Subject: [dba-SQLServer] Opinions welcome In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011B8B@ddi-01.DDI.local> Message-ID: Hi Michael: I had a site that had three offices, with a medium number of changes and additions a day. The site was set to auto-sync three times a day, morning, noon and late afternoon, or whenever a new record was created...there was a master invoice record/table so there could be no duplicates. There was also a manual button, just in case. In the seven years the site has been up, there have only been two or three duplicates, and those were the result of extenuating circumstances and had nothing to do with the sync process. If you need any help with the deployment, it is straightforward but can be lengthy; as for the coding...it is really very simple. Also, Arthur from the group has had a good deal of background in the subject. HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Michael Maddison Sent: Monday, September 13, 2004 4:49 AM To: dba-SQLServer Subject: [dba-SQLServer] Opinions welcome I have a client who is contemplating installing an application in 28 sites over an area of about 500 sq miles. They initially consulted me to ask what was involved in doing the installs remotely... 
At that stage they said to me they were going to install SQL in all 28 sites. I asked for clarification thinking they surely only needed MSDE at each site otherwise the cost would have been enormous (I know these people don't have that sort of money). Anyway, I've just had a look at the db (installed @ my office) and application (you can tell its been upsized from Access, all those varchar(255) fields). They now want me to advise them on replicating the 28 db's back to their central server! I've had a look at the schema and there is no native way to split the data by site. Do I 1. Run like hell ;-) 2. Replicate back to 28 db's 3. Build a master db that can tell 1 site from another 4. Run the apps client/server Can I 1. Append a site field to the replicated data? 2. Replicate the 28 --> 28 --> 1 (master) I'm going to a meeting tomorrow and just wanted to see if I had missed any obvious options. cheers Michael M _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Mon Sep 13 11:56:15 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 13 Sep 2004 09:56:15 -0700 Subject: [dba-SQLServer] Can I abort EM job? In-Reply-To: <000501c49796$adab9bd0$e8dafea9@ColbyM6805> References: <000501c49796$adab9bd0$e8dafea9@ColbyM6805> Message-ID: I don't know how to do that in either EM or QA... (that is, get a status update on where it is in its process) On Fri, 10 Sep 2004 20:31:22 -0400, John W. Colby wrote: > Francisco, > > >but it will cause additional delays as the log files have to be used to > restore the database to before you ran your job. > > If I had any indication of where in the process it was I might be tempted to > let it run. However, how do I know it won't take 2 more days? Or weeks? > > John W. 
Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Friday, September 10, 2004 1:56 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Can I abort EM job? > > aborting the job now won't cause you to loose data, but it will cause > additional delays as the log files have to be used to restore the database > to before you ran your job. > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From andy at minstersystems.co.uk Mon Sep 13 14:30:15 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Mon, 13 Sep 2004 20:30:15 +0100 Subject: [dba-SQLServer] File extensions In-Reply-To: Message-ID: <000001c499c8$1924af40$b274d0d5@minster33c3r25> Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? 
-- Andy Lacey http://www.minstersystems.co.uk From fhtapia at gmail.com Mon Sep 13 17:31:10 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 13 Sep 2004 15:31:10 -0700 Subject: [dba-SQLServer] Linked Server Problem for Connecting to Oracle In-Reply-To: <200409102255.i8AMtQdZ018511@mail713.megamailservers.com> References: <200409102255.i8AMtQdZ018511@mail713.megamailservers.com> Message-ID: I'd run the component checker on both machines and see what's different... On Fri, 10 Sep 2004 18:55:26 -0400, PaulR wrote: > Hi, > > Please help with this, thanks in advance. > > I am having a very hard time getting the connection between SQL Server 2000 and > Oracle 8.1.7 working using a linked server; I always get the same error message > . > > Error: 7933: OLE DB Provider 'MSDAORA' reported an Error, > > The error occurred while connecting to Oracle from SQL Server 2000 on WINDOWS > 2003. I have tried with MDAC > 2.8 and MDAC 2.7 and restarted the server; nothing is working. We always > get the same error > > Error 7399: OLE DB Provider 'MSDAORA' Reported an error. > OLE DB error trace [OLE/DB Provider 'MSDAORA' > IDBInitialize::Initialize returned 0x80004005: ]. > > But when I tried with one of the Windows 2000 machines, it works fine. Please > help... > > Please help, thanks > Paul > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From Jeff at OUTBAKTech.com Mon Sep 13 17:48:59 2004 From: Jeff at OUTBAKTech.com (Jeff Barrows) Date: Mon, 13 Sep 2004 17:48:59 -0500 Subject: [dba-SQLServer] SQL Security question Message-ID: <8DA8776D2F418E46A2A464AC6CE630509347@outbaksrv1.outbaktech.com> Can I use a VPN to connect to the Server? And do I need to connect directly to the SQLServer? 
Jeff Barrows MCP, MCAD, MCSD Outbak Technologies, LLC Racine, WI jeff at outbaktech.com -----Original Message----- From: Francisco Tapia [mailto:fhtapia at gmail.com] Sent: Friday, September 10, 2004 4:54 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] SQL Security question Yes. Your server should be configured to also support TCP/IP connections, and all your user needs is the IP and his UserID and Password for the SqlServer On Fri, 10 Sep 2004 15:30:03 -0600, Mackin, Christopher wrote: > Can you just give them a SQL Login instead of using Windows Authentication? > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Jeff > Barrows > Sent: Friday, September 10, 2004 2:17 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] SQL Security question > > Is it possible to add a user to SQL Server who is connecting from a different domain? I have a potential new client who needs to connect to their SQL data remotely, but their laptop is not part of the company domain. 
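[Editor's sketch: Chris Mackin's SQL Login suggestion above could be set up roughly as follows on SQL Server 2000. The login name, password, and database name are hypothetical placeholders.]

```sql
-- Create a SQL (non-Windows) login so the remote laptop does not
-- need to be in the company domain; names are hypothetical:
EXEC sp_addlogin @loginame = 'remoteuser',
                 @passwd   = 'Str0ng!Pwd',
                 @defdb    = 'ClientDb'

-- Grant that login access inside the target database:
USE ClientDb
EXEC sp_grantdbaccess @loginame = 'remoteuser'
```

[From the client side, a standard SQL-authentication connection string over TCP/IP (or a VPN) would then work, along the lines of `Provider=SQLOLEDB;Data Source=<server>;Initial Catalog=ClientDb;User ID=remoteuser;Password=...`.]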
> > TIA > > Jeff B > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Mon Sep 13 17:56:50 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 13 Sep 2004 15:56:50 -0700 Subject: [dba-SQLServer] SQL Security question In-Reply-To: <8DA8776D2F418E46A2A464AC6CE630509347@outbaksrv1.outbaktech.com> References: <8DA8776D2F418E46A2A464AC6CE630509347@outbaksrv1.outbaktech.com> Message-ID: If they are outside your network, yes, you can use a VPN; note that this will result in some overhead due to the VPN encryption. I'm not a VPN expert, but I think it enables the network connection; then your connection string establishes the connection to your SQL Server On Mon, 13 Sep 2004 17:48:59 -0500, Jeff Barrows wrote: > Can I use a VPN to connect to the Server? And do I need to connect > directly to the SQLServer? > > Jeff Barrows > MCP, MCAD, MCSD > > Outbak Technologies, LLC > Racine, WI > jeff at outbaktech.com > > > -----Original Message----- > From: Francisco Tapia [mailto:fhtapia at gmail.com] > Sent: Friday, September 10, 2004 4:54 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] SQL Security question > > Yes. 
> > Your server should be conigured to also support TCP/IP connections and > all your user needs is the IP, and his UserID and Password to the > SqlServer > > On Fri, 10 Sep 2004 15:30:03 -0600, Mackin, Christopher > wrote: > > Can you just give them a SQL Login instead of using Windows > Authentication? > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Jeff > > Barrows > > Sent: Friday, September 10, 2004 2:17 PM > > To: dba-sqlserver at databaseadvisors.com > > Subject: [dba-SQLServer] SQL Security question > > > > Is it possible to add a user to SQL Server who is connecting from a > different domain? I have a potential new client who needs to connect to > their SQL data remotely, but their laptop is not part of the company > domain. > > > > TIA > > > > Jeff B > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > -- > -Francisco > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From stuart at lexacorp.com.pg Mon Sep 13 18:01:00 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Tue, 14 Sep 2004 09:01:00 +1000 Subject: 
[dba-SQLServer] Quotes in delimited data In-Reply-To: <002501c499a4$ed47d2d0$e8dafea9@ColbyM6805> References: <413C5A12.13447.A4917D6@lexacorp.com.pg> Message-ID: <4146B34C.14928.17A5B072@lexacorp.com.pg> On 13 Sep 2004 at 11:18, John W. Colby wrote: > OK, now for a 660 field table? >
1. Get a good text editor with macro/block copy etc. capabilities such as NoteTab Light, Crimson Editor, etc. http://www.notetab.com/ http://www.crimsoneditor.com/
2. Get a large pot of coffee.......
-- Stuart From accessd at shaw.ca Mon Sep 13 23:56:38 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Mon, 13 Sep 2004 21:56:38 -0700 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <4146B34C.14928.17A5B072@lexacorp.com.pg> Message-ID: Hi Stuart: This is part of the reply that I got from 'Notetab' a while ago when I was doing some research. Did not get as far as 'CrimsonEditor'. Theoretically, NoteTab can open 2GB files; it depends on system resources. To work correctly it needs twice as much RAM as the file size, so 4GB of RAM in the largest case, but it can use drive space to work in. I don't believe there is a line limit, but there is a 32Kb column limit in NoteTab Pro and I think it is 2GB in Std/Light. Pro works much better with large files than Std/Light. In John's case it may not be enough. There are a couple of APIs that are supposed to extend file access to any size, and I have an app that uses them, and that seems to be the case. Maybe this class can be parlayed into a search and replace routine without much effort... (The code is not mine but I have added and changed things.) 
Option Explicit

Public Enum W32F_Errors
    W32F_UNKNOWN_ERROR = 45600
    W32F_FILE_ALREADY_OPEN
    W32F_PROBLEM_OPENING_FILE
    W32F_FILE_ALREADY_CLOSED
    W32F_Problem_seeking
End Enum

Private Const W32F_SOURCE = "Win32File Object"
Private Const GENERIC_WRITE = &H40000000
Private Const GENERIC_READ = &H80000000
Private Const FILE_ATTRIBUTE_NORMAL = &H80
Private Const CREATE_ALWAYS = 2
Private Const OPEN_ALWAYS = 4
Private Const INVALID_HANDLE_VALUE = -1
Private Const FILE_BEGIN = 0, FILE_CURRENT = 1, FILE_END = 2
Private Const FORMAT_MESSAGE_FROM_SYSTEM = &H1000

Private Declare Function FormatMessage Lib "kernel32" Alias "FormatMessageA" _
    (ByVal dwFlags As Long, lpSource As Long, ByVal dwMessageId As Long, _
    ByVal dwLanguageId As Long, ByVal lpBuffer As String, _
    ByVal nSize As Long, Arguments As Any) As Long
Private Declare Function ReadFile Lib "kernel32" _
    (ByVal hFile As Long, lpBuffer As Any, _
    ByVal nNumberOfBytesToRead As Long, lpNumberOfBytesRead As Long, _
    ByVal lpOverlapped As Long) As Long
Private Declare Function CloseHandle Lib "kernel32" _
    (ByVal hObject As Long) As Long
Private Declare Function WriteFile Lib "kernel32" _
    (ByVal hFile As Long, lpBuffer As Any, _
    ByVal nNumberOfBytesToWrite As Long, lpNumberOfBytesWritten As Long, _
    ByVal lpOverlapped As Long) As Long
Private Declare Function CreateFile Lib "kernel32" Alias "CreateFileA" _
    (ByVal lpFileName As String, ByVal dwDesiredAccess As Long, _
    ByVal dwShareMode As Long, ByVal lpSecurityAttributes As Long, _
    ByVal dwCreationDisposition As Long, ByVal dwFlagsAndAttributes As Long, _
    ByVal hTemplateFile As Long) As Long
Private Declare Function SetFilePointer Lib "kernel32" _
    (ByVal hFile As Long, ByVal lDistanceToMove As Long, _
    lpDistanceToMoveHigh As Long, ByVal dwMoveMethod As Long) As Long
Private Declare Function FlushFileBuffers Lib "kernel32" _
    (ByVal hFile As Long) As Long

Private hFile As Long, sFName As String, fAutoFlush As Boolean
Private FileHandle1 As Long

Public Property Get FileHandle() As Long
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    FileHandle = hFile
End Property

Public Property Get FileName() As String
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    FileName = sFName
End Property

Public Property Get IsOpen() As Boolean
    IsOpen = hFile <> INVALID_HANDLE_VALUE
End Property

Public Property Get AutoFlush() As Boolean
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    AutoFlush = fAutoFlush
End Property

Public Property Let AutoFlush(ByVal NewVal As Boolean)
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    fAutoFlush = NewVal
End Property

Public Sub OpenFile(ByVal sFileName As String)
    If hFile <> INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_OPEN, sFName
    Else
        FileHandle1 = hFile
    End If
    hFile = CreateFile(sFileName, GENERIC_WRITE Or GENERIC_READ, 0, _
        0, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0)
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_PROBLEM_OPENING_FILE, sFileName
    End If
    sFName = sFileName
End Sub

Public Sub CloseFile()
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    End If
    CloseHandle hFile
    sFName = ""
    fAutoFlush = False
    hFile = INVALID_HANDLE_VALUE
End Sub

Public Function ReadBytes(ByVal ByteCount As Long) As Variant
    Dim BytesRead As Long, Bytes() As Byte
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    ReDim Bytes(0 To ByteCount - 1) As Byte
    ReadFile hFile, Bytes(0), ByteCount, BytesRead, 0
    ReadBytes = Bytes
End Function

Public Sub WriteBytes(DataBytes() As Byte)
    Dim fSuccess As Long, BytesToWrite As Long, BytesWritten As Long
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    BytesToWrite = UBound(DataBytes) - LBound(DataBytes) + 1
    fSuccess = WriteFile(hFile, DataBytes(LBound(DataBytes)), _
        BytesToWrite, BytesWritten, 0)
    If fAutoFlush Then Flush
End Sub

Public Sub Flush()
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    FlushFileBuffers hFile
End Sub

Public Sub SeekAbsolute(ByVal HighPos As Long, ByVal LowPos As Long)
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    LowPos = SetFilePointer(hFile, LowPos, HighPos, FILE_BEGIN)
End Sub

Public Sub SeekRelative(ByVal Offset As Long)
    Dim TempLow As Long, TempErr As Long
    If hFile = INVALID_HANDLE_VALUE Then
        RaiseError W32F_FILE_ALREADY_CLOSED
    Else
        FileHandle1 = hFile
    End If
    TempLow = SetFilePointer(hFile, Offset, ByVal 0&, FILE_CURRENT)
    If TempLow = -1 Then
        TempErr = Err.LastDllError
        If TempErr Then
            RaiseError W32F_Problem_seeking, "Error " & TempErr & "." & _
                vbCrLf & CStr(TempErr)
        End If
    End If
End Sub

Private Sub Class_Initialize()
    hFile = INVALID_HANDLE_VALUE
End Sub

Private Sub Class_Terminate()
    If hFile <> INVALID_HANDLE_VALUE Then CloseHandle hFile
End Sub

Private Sub RaiseError(ByVal ErrorCode As W32F_Errors, Optional sExtra)
    Dim Win32Err As Long, Win32Text As String
    Dim lbStatus As Boolean
    Win32Err = Err.LastDllError
    lbStatus = True
    If Win32Err Then
        Win32Text = vbCrLf & "Error " & Win32Err & vbCrLf & _
            DecodeAPIErrors(Win32Err)
    End If
    Select Case ErrorCode
        Case W32F_FILE_ALREADY_OPEN
            Err.Raise W32F_FILE_ALREADY_OPEN, W32F_SOURCE, lbStatus = False
        Case W32F_PROBLEM_OPENING_FILE
            Err.Raise W32F_PROBLEM_OPENING_FILE, W32F_SOURCE, lbStatus = False
        Case W32F_FILE_ALREADY_CLOSED
            Err.Raise W32F_FILE_ALREADY_CLOSED, W32F_SOURCE, lbStatus = False
        Case W32F_Problem_seeking
            Err.Raise W32F_Problem_seeking, W32F_SOURCE, lbStatus = False
        Case Else
            Err.Raise W32F_UNKNOWN_ERROR, W32F_SOURCE, lbStatus = False
    End Select
    If lbStatus = False Then CloseHandle FileHandle1
End Sub

Private Function DecodeAPIErrors(ByVal ErrorCode As Long) As String
    Dim sMessage As String, MessageLength As Long
    sMessage = Space$(256)
    MessageLength = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, 0&, _
        ErrorCode, 0&, sMessage, 256&, 0&)
    If MessageLength > 0 Then
        DecodeAPIErrors = Left(sMessage, MessageLength)
    Else
        DecodeAPIErrors = "Unknown Error."
    End If
End Function

HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Stuart McLachlan Sent: Monday, September 13, 2004 4:01 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in delimited data On 13 Sep 2004 at 11:18, John W. Colby wrote: > OK, now for a 660 field table? > 1. Get a good text editor with macro/block copy etc capabilities such as Notetab Light, Crimson Editor etc. http://www.notetab.com/ http://www.crimsoneditor.com/ 2. Get a large pot of coffee....... -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Tue Sep 14 01:24:59 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Tue, 14 Sep 2004 16:24:59 +1000 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: References: <4146B34C.14928.17A5B072@lexacorp.com.pg> Message-ID: <41471B5B.5295.193C279F@lexacorp.com.pg> On 13 Sep 2004 at 21:56, Jim Lawrence (AccessD) wrote: > This is part of the reply that I got from 'Notetab' a while when I was doing > some research. Did not get as far as 'CrimsonEditor'. > > > Theoretically, NoteTab can open 2GB files; it depends on system > resources. ----8<----- > > In John's case it may not be enough. I should hope it is; we were only talking about editing the 660 lines in the BCP format file, not doing a find and replace on the data files. 
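[Editor's note: Jim's Win32File class above can be driven along these lines. This is a sketch only; the file name and offsets are illustrative, and note that ReadBytes as posted always returns a full ByteCount-sized array, so detecting end-of-file would require exposing the BytesRead value from ReadFile.]

```vb
' Sketch: open a large file, seek past the 4GB mark, and read a 4KB
' slice with the Win32File class above. SetFilePointer takes the
' 64-bit offset as two Longs, which is what lets the class move past
' the 2GB barrier: HighPos = 1, LowPos = 0 positions exactly 4GB in.
Sub ReadSliceFromHugeFile()
    Dim oFile As Win32File          ' the class module posted above
    Dim Bytes() As Byte
    Set oFile = New Win32File
    oFile.OpenFile "D:\Data\hugefile.txt"   ' illustrative name
    oFile.SeekAbsolute 1, 0
    Bytes = oFile.ReadBytes(4096)
    Debug.Print oFile.FileName & ": read " & _
        (UBound(Bytes) - LBound(Bytes) + 1) & " bytes"
    oFile.CloseFile
End Sub
```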
:-) -- Stuart From andy at minstersystems.co.uk Tue Sep 14 01:48:59 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Tue, 14 Sep 2004 07:48:59 +0100 Subject: [dba-SQLServer] File extensions In-Reply-To: <41471B5B.5295.193C279F@lexacorp.com.pg> Message-ID: <000b01c49a26$e9c19830$b274d0d5@minster33c3r25> Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? -- Andy Lacey http://www.minstersystems.co.uk From michael at ddisolutions.com.au Tue Sep 14 05:20:38 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Tue, 14 Sep 2004 20:20:38 +1000 Subject: [dba-SQLServer] File extensions Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011B9A@ddi-01.DDI.local> Andy, I'm pretty sure with SQL you can give it any extension. However the defaults are mdf for the data and ldf for the log file. As to taking the file(s) home... well maybe, maybe not ;-) You will need to hook up an interface of some kind to see what's what with SQL. (You probably can do it from cmd line but I'm not sure) You need Enterprise Manager(EM) or some other tool, there are free ones out there but I don't have a link. >From EM I would backup the db (may be more than 1 data file), take the backup home, restore it with EM. 
If the db is over 2GB you will need the full SQL server; if less, then MSDE should be OK. hope that's enough to get you started? cheers Michael M Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? -- Andy Lacey http://www.minstersystems.co.uk _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From andy at minstersystems.co.uk Tue Sep 14 05:47:09 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Tue, 14 Sep 2004 11:47:09 +0100 Subject: [dba-SQLServer] File extensions Message-ID: <20040914104707.5A0CE25E07E@smtp.nildram.co.uk> Thanks Michael. Taking it home may not be on. Whilst I have SQL server they, as far as I know, just have a 3rd-party app that (I think) uses SQL server as a BE. So they probably don't have EM or any tools. Would that be right? If someone supplies a product using a SQL Server BE I'm assuming they don't supply much more than the database and drivers. 
-- Andy Lacey http://www.minstersystems.co.uk --------- Original Message -------- From: dba-sqlserver at databaseadvisors.com To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions Date: 14/09/04 10:23 > > Andy, > > I'm pretty sure with sql you can give it any extension. However the > defaults are mdf for the data and ldf for the log file. > > As to taking the files home... well maybe, maybe not ;-) > You will need to hook up an interface of some kind to see what what with > SQL. (You probably can do it from cmd line but I'm not sure) > You need Enterprise Manager(EM) or some other tool, there are free ones > out there but I don't have a link. > >From EM I would backup the db (may be more then 1 data file), take the > backup home, restore it with EM. > If the db is over 2gb you will need the full SQL server, if less then > MSDE should be OK. > > hope that's enough to get you started? > > cheers > > Michael M > > Sent this hours ago but hasn't appeared, hence trying again - so > apologies in advance when the original turns up. > > Hello good people on this wet and windy Autumn evening in the UK > > Simple question from a simple soul. I keep a weather eye on the SQL > server list but never having actually developed a SQL system my question > is: if I go to look at a client's existing system how can I tell if it > is SQL (which I think it is)? Would file suffixes tell me, or can a > developer call a database anything he/she likes? And if the file > suffixes are the answer what are the magic three letters? Going on from > there, if I was offered the opportunity to take the data away to have a > look at it can I just zip up a file or two, and if so what would I need > in order to be able to read the data in Access when I got back home? 
> > -- Andy Lacey > http://www.minstersystems.co.uk > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > > ________________________________________________ Message sent using UebiMiau 2.7.2 From accessd at shaw.ca Tue Sep 14 07:39:00 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Tue, 14 Sep 2004 05:39:00 -0700 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <41471B5B.5295.193C279F@lexacorp.com.pg> Message-ID: Hi Stuart: Try 'NoteTab' or 'CrimsonEditor' against any file that exceeds 2GB and 'NoteTab' will choke, and I suspect 'CrimsonEditor' will as well. ...the old famous 2GB limit rears its ugly head... :-( The class I just sent, I used a couple of years ago to read large files. I just tested it on a 70GB file and it had no problems. One issue that I have found is that when a file is being read and for some reason the application is improperly closed, the file that was being read is left locked but it is undamaged....just a cautionary note. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Stuart McLachlan Sent: Monday, September 13, 2004 11:25 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in delimited data On 13 Sep 2004 at 21:56, Jim Lawrence (AccessD) wrote: > This is part of the reply that I got from 'Notetab' a while when I was doing > some research. Did not get as far as 'CrimsonEditor'. > > > Theoretically, NoteTab can open 2GB files; it depends on system > resources. ----8<----- > > In John's case it may not be enough. 
I should hope it is; we were only talking about editing the 660 lines in the BCP format file, not doing a find and replace on the data files. :-) -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From michael at ddisolutions.com.au Tue Sep 14 07:46:21 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Tue, 14 Sep 2004 22:46:21 +1000 Subject: [dba-SQLServer] File extensions Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011B9B@ddi-01.DDI.local> Yes and no ;-) MSDE you just get the db and engine... Full SQL Server you get the tools. Sounds like MSDE, if they don't know what it is... You should be able to connect to it using an Access .adp? When you choose connect it should list any SQL servers/MSDE's available. cheers Michael M Thanks Michael. Taking it home may not be on. Whilst I have SQL server they, as far as I know, just have a 3rd-party app that (I think) uses SQL server as a BE. So they probably don't have EM or any tools. Would that be right? If someone supplies a product using a SQL Server BE I'm assuming they don't supply much more than the database and drivers. -- Andy Lacey http://www.minstersystems.co.uk --------- Original Message -------- From: dba-sqlserver at databaseadvisors.com To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions Date: 14/09/04 10:23 > > Andy, > > I'm pretty sure with sql you can give it any extension. However the > defaults are mdf for the data and ldf for the log file. > > As to taking the files home... well maybe, maybe not ;-) You will need > to hook up an interface of some kind to see what what with SQL. (You > probably can do it from cmd line but I'm not sure) You need Enterprise > Manager(EM) or some other tool, there are free ones out there but I > don't have a link. 
> >From EM I would backup the db (may be more then 1 data file), take > the backup home, restore it with EM. > If the db is over 2gb you will need the full SQL server, if less then > MSDE should be OK. > > hope that's enough to get you started? > > cheers > > Michael M > > Sent this hours ago but hasn't appeared, hence trying again - so > apologies in advance when the original turns up. > > Hello good people on this wet and windy Autumn evening in the UK > > Simple question from a simple soul. I keep a weather eye on the SQL > server list but never having actually developed a SQL system my > question > is: if I go to look at a client's existing system how can I tell if it > is SQL (which I think it is)? Would file suffixes tell me, or can a > developer call a database anything he/she likes? And if the file > suffixes are the answer what are the magic three letters? Going on > from there, if I was offered the opportunity to take the data away to > have a look at it can I just zip up a file or two, and if so what > would I need in order to be able to read the data in Access when I got back home? 
> > -- Andy Lacey > http://www.minstersystems.co.uk > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > > ________________________________________________ Message sent using UebiMiau 2.7.2 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Tue Sep 14 08:56:38 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 09:56:38 -0400 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <41471B5B.5295.193C279F@lexacorp.com.pg> Message-ID: <005801c49a62$ab9c5b80$e8dafea9@ColbyM6805> LOL, exactly right. I am not going to edit a 10g file! However I hope to never have to set up a 660 field BCP format file either!!! John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Tuesday, September 14, 2004 2:25 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in delimited data On 13 Sep 2004 at 21:56, Jim Lawrence (AccessD) wrote: > This is part of the reply that I got from 'Notetab' a while when I was > doing some research. Did not get as far as 'CrimsonEditor'. > > > Theoretically, NoteTab can open 2GB files; it depends on system > resources. ----8<----- > > In John's case it may not be enough. 
I should hope it is, we were only talking about editing the 660 lines in the BCP format file, not doing a find a replace on the data files. :-) -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Tue Sep 14 08:54:46 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 09:54:46 -0400 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: Message-ID: <005701c49a62$663c8010$e8dafea9@ColbyM6805> >is that when reading a file and for some reason the application is improperly closed, the file that was being read is left locked but it is undamaged....just a cautionary note. This happens to me with SQL Server as well. If you have to use task manager to close EM, the database files end up locked but undamaged. Can't copy them to another drive (backup) but can use them at will. I understand that Windows has some utility that will clear those locks, which I'd dearly love to get my hands on. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD) Sent: Tuesday, September 14, 2004 8:39 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in delimited data Hi Stuart: Try 'NoteTab' or 'CrimsonEditor' against any file the exceed 2GB and 'NoteTab' will choke and I suspect 'CrimsonEditor' will as well. ...the old famous 2GB limit rears it's ugly head... :-( The class I just sent, I used a couple of years ago to read large files. I just tested it on a 70GB file and it had no problems. 
One issue that I have found is that when a file is being read and for some reason the application is improperly closed, the file that was being read is left locked but it is undamaged....just a cautionary note. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Stuart McLachlan Sent: Monday, September 13, 2004 11:25 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in delimited data On 13 Sep 2004 at 21:56, Jim Lawrence (AccessD) wrote: > This is part of the reply that I got from 'Notetab' a while when I was doing > some research. Did not get as far as 'CrimsonEditor'. > > > Theoretically, NoteTab can open 2GB files; it depends on system > resources. ----8<----- > > In John's case it may not be enough. I should hope it is; we were only talking about editing the 660 lines in the BCP format file, not doing a find and replace on the data files. :-) -- Stuart _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From CMackin at Quiznos.com Tue Sep 14 08:59:12 2004 From: CMackin at Quiznos.com (Mackin, Christopher) Date: Tue, 14 Sep 2004 07:59:12 -0600 Subject: [dba-SQLServer] File extensions Message-ID: I forget in what version this was added, but in Access XP you can do some of the database maintenance functions like backing up and restoring your database. You could also very easily write the code to do so; I believe Arthur has developed a GUI using SQL DMO to provide a lot of functionality found in EM and more. 
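[Editor's note: the SQL-DMO route mentioned here can be sketched in VB like this. A sketch only: it assumes a project reference to the Microsoft SQLDMO Object Library, and the server, database and backup path names are illustrative, not from the thread.]

```vb
' Sketch: back up a database through SQL-DMO, the same object library
' EM itself drives. Requires a reference to "Microsoft SQLDMO Object
' Library"; server, database and file names below are made up.
Sub BackupViaDMO()
    Dim oServer As SQLDMO.SQLServer
    Dim oBackup As SQLDMO.Backup
    Set oServer = New SQLDMO.SQLServer
    oServer.LoginSecure = True              ' Windows authentication
    oServer.Connect "(local)"
    Set oBackup = New SQLDMO.Backup
    oBackup.Database = "Northwind"
    oBackup.Files = "C:\Backups\Northwind.bak"
    oBackup.SQLBackup oServer               ' performs the backup
    oServer.DisConnect
End Sub
```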
Additionally, you could back it up via code so there are many possibilities to Backup or Detach the database and restore it. -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Michael Maddison Sent: Tuesday, September 14, 2004 6:46 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions Yes and no ;-) MSDE you just get the db and engine... Full SQL Server you get the tools. Sounds like msde, if they don't know what it is... You should be able to connect to it using Access.adp? When you choose connect it should list any SQL servers/MSDE's available. cheers Michael M Thanks Michael. Taking it home may not be on. Whilst I have SQL server theyy, as far as I know, just have a 3rd-party app that (I think) uses SQL server as a BE. So they probabaly don't have EM or any tools. Would that be right? If someone supplies a product using a SQL Server BE I'm assuming they don't supply much more than the database and drivers. -- Andy Lacey http://www.minstersystems.co.uk --------- Original Message -------- From: dba-sqlserver at databaseadvisors.com To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions Date: 14/09/04 10:23 > > Andy, > > I'm pretty sure with sql you can give it any extension. However the > defaults are mdf for the data and ldf for the log file. > > As to taking the files home... well maybe, maybe not ;-) You will need > to hook up an interface of some kind to see what what with SQL. (You > probably can do it from cmd line but I'm not sure) You need Enterprise > Manager(EM) or some other tool, there are free ones out there but I > don't have a link. > >From EM I would backup the db (may be more then 1 data file), take > the backup home, restore it with EM. > If the db is over 2gb you will need the full SQL server, if less then > MSDE should be OK. > > hope that's enough to get you started? 
> > cheers > > Michael M > > Sent this hours ago but hasn't appeared, hence trying again - so > apologies in advance when the original turns up. > > Hello good people on this wet and windy Autumn evening in the UK > > Simple question from a simple soul. I keep a weather eye on the SQL > server list but never having actually developed a SQL system my > question > is: if I go to look at a client's existing system how can I tell if it > is SQL (which I think it is)? Would file suffixes tell me, or can a > developer call a database anything he/she likes? And if the file > suffixes are the answer what are the magic three letters? Going on > from there, if I was offered the opportunity to take the data away to > have a look at it can I just zip up a file or two, and if so what > would I need in order to be able to read the data in Access when I got back home? > > -- Andy Lacey > http://www.minstersystems.co.uk > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > > ________________________________________________ Message sent using UebiMiau 2.7.2 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From accessd at shaw.ca Tue Sep 14 07:49:05 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Tue, 14 Sep 2004 05:49:05 -0700 Subject: 
[dba-SQLServer] File extensions In-Reply-To: <000b01c49a26$e9c19830$b274d0d5@minster33c3r25> Message-ID: Hi Andy: At a workstation, check the ODBC DSN list. If one is directed toward an MS SQL server, you probably have a SQL server running. Then just scan the Access code to see how the FE connects. If their app is bound through an ODBC connection, the table and query icons will appear quite different; otherwise scan the code for the word ADO. HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Andy Lacey Sent: Monday, September 13, 2004 11:49 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] File extensions Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? 
-- Andy Lacey http://www.minstersystems.co.uk _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Tue Sep 14 10:23:29 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Wed, 15 Sep 2004 01:23:29 +1000 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: References: <41471B5B.5295.193C279F@lexacorp.com.pg> Message-ID: <41479991.5883.1B29288B@lexacorp.com.pg> On 14 Sep 2004 at 5:39, Jim Lawrence (AccessD) wrote: > Hi Stuart: > > Try 'NoteTab' or 'CrimsonEditor' against any file the exceed 2GB and > 'NoteTab' will choke and I suspect 'CrimsonEditor' will as well. ...the old > famous 2GB limit rears it's ugly head... :-( > For reading very large files I use LFTViewer http://www.swiftgear.com/ltfviewer/features.html -- Stuart From jwcolby at colbyconsulting.com Tue Sep 14 10:47:20 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 11:47:20 -0400 Subject: [dba-SQLServer] Database Files In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011B8B@ddi-01.DDI.local> Message-ID: <006b01c49a72$21dcbc40$e8dafea9@ColbyM6805> In the quest to get sufficient storage to do the nVLDB bulk mail database I split the database into 5 containers, 4 for storage and one for the log file. Understand that this is still, at this point, and likely will remain, just a single table. Does anyone understand the usage of database files well enough to tell me whether splitting it into multiple pieces like this causes performance degradation or enhancement? I have read that using different files for different tables can enhance performance, but what about where it is all just a single table? In the end I may need to leave it as multiple files since processing such as adding indexes can temporarily inflate the files by almost double. 
When I am done they shrink back down nicely but during processing they get big. BTW, as to the actual size of the database (one table, 164 million records, 660 fields, 3000+ bytes / record) - it used 4 files of ~40 gbytes after shrinking. I added in the PK (autonumber) and indexed the state code field and ended up with 2 files of 40g and 2 files of 86g after shrinking. So the indexes added a LOT of size to the database. John W. Colby www.ColbyConsulting.com From martyconnelly at shaw.ca Tue Sep 14 11:15:26 2004 From: martyconnelly at shaw.ca (MartyConnelly) Date: Tue, 14 Sep 2004 09:15:26 -0700 Subject: [dba-SQLServer] File extensions References: <20040914104707.5A0CE25E07E@smtp.nildram.co.uk> Message-ID: <4147191E.9080506@shaw.ca> Here is a freebie replacement for EM (not completely compatible missing some options) for SQL 7 and 2000 (2 different versions) comes with VB6 source code, works through SQL DMO http://www.asql.biz/DbaMgr.shtm I believe BMC used to have a free Web Version of EM. Andy Lacey wrote: >Thanks Michael. Taking it home may not be on. Whilst I have SQL server >theyy, as far as I know, just have a 3rd-party app that (I think) uses SQL >server as a BE. So they probabaly don't have EM or any tools. Would that be >right? If someone supplies a product using a SQL Server BE I'm assuming they >don't supply much more than the database and drivers. >-- >Andy Lacey >http://www.minstersystems.co.uk > > > >--------- Original Message -------- >From: dba-sqlserver at databaseadvisors.com >To: dba-sqlserver at databaseadvisors.com >Subject: RE: [dba-SQLServer] File extensions >Date: 14/09/04 10:23 > > > >>Andy, >> >>I'm pretty sure with sql you can give it any extension. However the >>defaults are mdf for the data and ldf for the log file. >> >>As to taking the files home... well maybe, maybe not ;-) >>You will need to hook up an interface of some kind to see what what with >>SQL. 
(You probably can do it from cmd line but I'm not sure) >>You need Enterprise Manager(EM) or some other tool, there are free ones >>out there but I don't have a link. >>>From EM I would backup the db (may be more then 1 data file), take the >>backup home, restore it with EM. >>If the db is over 2gb you will need the full SQL server, if less then >>MSDE should be OK. >> >>hope that's enough to get you started? >> >>cheers >> >>Michael M >> >>Sent this hours ago but hasn't appeared, hence trying again - so >>apologies in advance when the original turns up. >> >>Hello good people on this wet and windy Autumn evening in the UK >> >>Simple question from a simple soul. I keep a weather eye on the SQL >>server list but never having actually developed a SQL system my question >>is: if I go to look at a client's existing system how can I tell if it >>is SQL (which I think it is)? Would file suffixes tell me, or can a >>developer call a database anything he/she likes? And if the file >>suffixes are the answer what are the magic three letters? Going on from >>there, if I was offered the opportunity to take the data away to have a >>look at it can I just zip up a file or two, and if so what would I need >>in order to be able to read the data in Access when I got back home? 
>>
>>-- Andy Lacey
>>http://www.minstersystems.co.uk
>>
>>_______________________________________________
>>dba-SQLServer mailing list
>>dba-SQLServer at databaseadvisors.com
>>http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
>>http://www.databaseadvisors.com
>
>________________________________________________
>Message sent using UebiMiau 2.7.2
>
>_______________________________________________
>dba-SQLServer mailing list
>dba-SQLServer at databaseadvisors.com
>http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
>http://www.databaseadvisors.com

--
Marty Connelly
Victoria, B.C.
Canada

From accessd at shaw.ca Tue Sep 14 11:40:38 2004
From: accessd at shaw.ca (Jim Lawrence (AccessD))
Date: Tue, 14 Sep 2004 09:40:38 -0700
Subject: [dba-SQLServer] Quotes in delimited data
In-Reply-To: <005701c49a62$663c8010$e8dafea9@ColbyM6805>
Message-ID:

At the risk of overstepping list etiquette: Me Too!

Jim

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby
Sent: Tuesday, September 14, 2004 6:55 AM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] Quotes in delimited data

>is that when reading a file and for some reason the application is
>improperly closed, the file that was being read is left locked but it is
>undamaged....just a cautionary note.

This happens to me with SQL Server as well. If you have to use Task Manager to close EM, the database files end up locked but undamaged. I can't copy them to another drive (backup) but can use them at will. I understand that Windows has some utility that will clear those locks, which I'd dearly love to get my hands on.

John W.
Colby
www.ColbyConsulting.com

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD)
Sent: Tuesday, September 14, 2004 8:39 AM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] Quotes in delimited data

Hi Stuart:

Try 'NoteTab' or 'CrimsonEditor' against any file that exceeds 2GB and 'NoteTab' will choke, and I suspect 'CrimsonEditor' will as well. ...the old famous 2GB limit rears its ugly head... :-(

The class I just sent is one I used a couple of years ago to read large files. I just tested it on a 70GB file and it had no problems. One issue that I have found is that when reading a file, if for some reason the application is improperly closed, the file that was being read is left locked but it is undamaged....just a cautionary note.

Jim

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan
Sent: Monday, September 13, 2004 11:25 PM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] Quotes in delimited data

On 13 Sep 2004 at 21:56, Jim Lawrence (AccessD) wrote:

> This is part of the reply that I got from 'Notetab' a while ago when I was
> doing some research. Did not get as far as 'CrimsonEditor'.
>
> > Theoretically, NoteTab can open 2GB files; it depends on system
> > resources.
----8<-----
> In John's case it may not be enough.

I should hope it is; we were only talking about editing the 660 lines in the BCP format file, not doing a find and replace on the data files.
:-)

--
Stuart

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From jwcolby at colbyconsulting.com Tue Sep 14 11:45:32 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Tue, 14 Sep 2004 12:45:32 -0400
Subject: [dba-SQLServer] FYI - nVLDB performance
In-Reply-To: <006b01c49a72$21dcbc40$e8dafea9@ColbyM6805>
Message-ID: <006c01c49a7a$4329b170$e8dafea9@ColbyM6805>

Just to let you know some numbers on the database as it currently stands.

First, I have two almost identical machines which I built to hold and manipulate the database. Both machines use an MSI K8N Neo motherboard with built-in 4-port SATA and dual-channel IDE, gbit LAN. The processor is the 754-pin Athlon64 at 3ghz. The server currently has Windows 2K and 3g RAM installed. Apparently Win2K can use up to 4g of RAM whereas XP is limited to 2g. Unfortunately I cannot persuade SQL Server to use more than 2g of RAM, so I am not sure that more memory than that is really useful in this context.

The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor IDE drive holding the data files and the log file respectively. The second machine is currently running XP Pro. Since the two new machines have gbit nics built into the motherboard, I bought an 8-port gbit switch so they could talk at full speed. In general I have found that I can run operations from the second machine to the server over gbit LAN at close to full speed, i.e. the LAN is not a severe bottleneck anymore (it definitely was at 100mbit).

The db is 164 million records, 660 fields, ~3K bytes / record. I have split the db into 5 files on (5) 250g hard disks. I turned on bulk logging for the import, and when the raw data was finished importing, the data files were about 40g apiece after compacting and the log file about 100 meg. Turning on bulk logging actually enabled the process to complete in a timely manner.

After import I added a PK field, a Long autoincrement. Doing that locked up the machine for over 24 hours but finished with the new field created as expected. When finished, the log file was sitting at the 200g maximum size and the 4 data files were sitting at about 80gb each, but they shrank back to about 40 - 60g. I then added an index on the state field and ended up with two data files at 40g and two at 80g. Adding the PK field and its index, as well as the index on the state field, apparently added about 80g of data to the 160g used for the data itself. I have to apologize but I didn't keep written logs of file sizes after each step. Each of these operations also took overnight. After each operation I used the shrink to compact the files back down. Strangely, the shrink operation took a rather long time on two of the data files (as much as an hour) but only about a minute on the other two and the log file.

I have turned to WinZip for backup. Zipping any of the database or log files gives me about a 13-to-1 compression ratio, so I can zip the files and store a 43 gig file in a 3.3 gig zip. Of course it does take a while. I have the server zip a file and the other NEO-based machine zip another. The second NEO finishes about 10% behind the server (time-wise), which is pretty darned good IMHO. Someday I will time the zip operation.

I have run a count of records grouped by state. That operation took a mere 1 minute and clearly demonstrates the power of the index!
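[Editor's note: John's load-then-index sequence can be sketched in T-SQL for readers following along. The database, table, file, and column names below are hypothetical (his actual names were never posted); this is a sketch of the SQL Server 2000-era commands, not his exact script.]

```sql
-- Bulk-logged recovery keeps the log small during the bulk load.
ALTER DATABASE nVLDB SET RECOVERY BULK_LOGGED

-- Load the raw text file; TABLOCK allows minimally logged inserts.
BULK INSERT dbo.tblList
FROM 'D:\raw\list.txt'
WITH (FORMATFILE = 'D:\raw\list.fmt', TABLOCK)

-- Add the autonumber PK after the load (this is the step that ran 24+ hours).
ALTER TABLE dbo.tblList ADD PK INT IDENTITY(1, 1) NOT NULL
ALTER TABLE dbo.tblList ADD CONSTRAINT PK_tblList PRIMARY KEY NONCLUSTERED (PK)

-- Index the state code for the slicing-and-dicing queries.
CREATE INDEX IX_tblList_State ON dbo.tblList (StateCode)

-- Reclaim the space the index builds temporarily inflated (one per data file).
DBCC SHRINKFILE (nVLDB_Data1)

-- The one-minute query: counts grouped by state, driven by the new index.
SELECT StateCode, COUNT(*) AS Recs
FROM dbo.tblList
GROUP BY StateCode
```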
I also built a view to build an "every hundredth record" subset of the data using a "WHERE (PK % 100) = 0" clause. This process takes about 25 minutes to create the dataset. My intent is to use that view to create a table containing that sub data set, then use that smaller table to run the various analyses that the client is asking for.

I still have to run what I fear will be a very time-consuming process to strip trailing spaces from various fields. A lot of work remains, but at least I have it all in SQL Server and can produce results now.

John W. Colby
www.ColbyConsulting.com

From fhtapia at gmail.com Tue Sep 14 12:08:01 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Tue, 14 Sep 2004 10:08:01 -0700
Subject: [dba-SQLServer] File extensions
In-Reply-To: <4147191E.9080506@shaw.ca>
References: <20040914104707.5A0CE25E07E@smtp.nildram.co.uk> <4147191E.9080506@shaw.ca>
Message-ID:

Additionally, Andy, you can now buy SQL Server Developer for $50 from M$ and you'll be able to load databases larger than the 2gb limit. Plus you have more control over the environment, and along w/ that you get EM and QA and all the other utilities that come w/ SQL Server.

On Tue, 14 Sep 2004 09:15:26 -0700, MartyConnelly wrote:
> Here is a freebie replacement for EM (not completely compatible, missing
> some options) for SQL 7 and 2000 (2 different versions)
> comes with VB6 source code, works through SQL DMO
> http://www.asql.biz/DbaMgr.shtm
> I believe BMC used to have a free Web Version of EM.
>
> Andy Lacey wrote:
>
> >Thanks Michael. Taking it home may not be on. Whilst I have SQL Server,
> >they, as far as I know, just have a 3rd-party app that (I think) uses SQL
> >server as a BE. So they probably don't have EM or any tools. Would that be
> >right? If someone supplies a product using a SQL Server BE I'm assuming they
> >don't supply much more than the database and drivers.
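[Editor's note: the every-hundredth-record view John describes, plus the follow-on steps he mentions (materializing it into a table and stripping trailing spaces), might look roughly like this. Object and field names are hypothetical; a sketch, not his actual code.]

```sql
-- 1% systematic sample: every row whose autonumber PK is a multiple of 100.
CREATE VIEW dbo.vwSample AS
SELECT *
FROM dbo.tblList
WHERE (PK % 100) = 0
GO

-- Materialize the sample into its own table for the client's analyses.
SELECT *
INTO dbo.tblSample
FROM dbo.vwSample
GO

-- Strip trailing spaces from one character field (repeat per field; on
-- 164M rows this is the long-running pass John is bracing for). The WHERE
-- just limits the write to rows that actually end in a space.
UPDATE dbo.tblList
SET StateCode = RTRIM(StateCode)
WHERE StateCode LIKE '% '
GO
```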
> >-- > >Andy Lacey > >http://www.minstersystems.co.uk > > > > > > > >--------- Original Message -------- > >From: dba-sqlserver at databaseadvisors.com > >To: dba-sqlserver at databaseadvisors.com > >Subject: RE: [dba-SQLServer] File extensions > >Date: 14/09/04 10:23 > > > > > > > >>Andy, > >> > >>I'm pretty sure with sql you can give it any extension. However the > >>defaults are mdf for the data and ldf for the log file. > >> > >>As to taking the files home... well maybe, maybe not ;-) > >>You will need to hook up an interface of some kind to see what what with > >>SQL. (You probably can do it from cmd line but I'm not sure) > >>You need Enterprise Manager(EM) or some other tool, there are free ones > >>out there but I don't have a link. > >>>From EM I would backup the db (may be more then 1 data file), take the > >>backup home, restore it with EM. > >>If the db is over 2gb you will need the full SQL server, if less then > >>MSDE should be OK. > >> > >>hope that's enough to get you started? > >> > >>cheers > >> > >>Michael M > >> > >>Sent this hours ago but hasn't appeared, hence trying again - so > >>apologies in advance when the original turns up. > >> > >>Hello good people on this wet and windy Autumn evening in the UK > >> > >>Simple question from a simple soul. I keep a weather eye on the SQL > >>server list but never having actually developed a SQL system my question > >>is: if I go to look at a client's existing system how can I tell if it > >>is SQL (which I think it is)? Would file suffixes tell me, or can a > >>developer call a database anything he/she likes? And if the file > >>suffixes are the answer what are the magic three letters? Going on from > >>there, if I was offered the opportunity to take the data away to have a > >>look at it can I just zip up a file or two, and if so what would I need > >>in order to be able to read the data in Access when I got back home? 
> >>
> >>-- Andy Lacey
> >>http://www.minstersystems.co.uk
> >>
> >>_______________________________________________
> >>dba-SQLServer mailing list
> >>dba-SQLServer at databaseadvisors.com
> >>http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> >>http://www.databaseadvisors.com
> >
> >________________________________________________
> >Message sent using UebiMiau 2.7.2
> >
> >_______________________________________________
> >dba-SQLServer mailing list
> >dba-SQLServer at databaseadvisors.com
> >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> >http://www.databaseadvisors.com
>
> --
> Marty Connelly
> Victoria, B.C.
> Canada

--
-Francisco

From fhtapia at gmail.com Tue Sep 14 12:23:38 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Tue, 14 Sep 2004 10:23:38 -0700
Subject: [dba-SQLServer] FYI - nVLDB performance
In-Reply-To: <006c01c49a7a$4329b170$e8dafea9@ColbyM6805>
References: <006b01c49a72$21dcbc40$e8dafea9@ColbyM6805> <006c01c49a7a$4329b170$e8dafea9@ColbyM6805>
Message-ID:

John,
Thanks for keeping us updated... It seems logical to me, since you are running backups of your data, that it may be wise to switch from FULL logging to SIMPLE logging now that you're doing indexes. This way you don't log every event. Additionally, restricting the log to a specific size will also help speed things along; this way SQL Server is not tied up w/ useless things such as creating virtual page files.
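[Editor's note: the suggestion above, dropping to simple recovery and capping the log file, comes down to two statements. Database and log file names are hypothetical, since John never posted his; a minimal sketch.]

```sql
-- Stop full logging once point-in-time recovery isn't needed;
-- SIMPLE recovery truncates the log at each checkpoint.
ALTER DATABASE nVLDB SET RECOVERY SIMPLE

-- Cap the log at a fixed size so it cannot balloon to the 200 GB
-- John saw while adding the PK field.
ALTER DATABASE nVLDB
MODIFY FILE (NAME = nVLDB_Log, MAXSIZE = 10GB)
```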
In general this is good practice, but in your case it will be all the more necessary. I've posted, as have Eric and I believe others, on how to do this; if you don't have the email, let me know and I'll post again.

As far as trying to use the 4gb, you may want to take a look at this... it will obviously help speed up performance, because up until now you've only been using 2gb. And yes, Windows 2000 natively supports 4gb while XP is restricted to 2gb... why? I dunno. (But that's one more reason for me to hold on to 2000 a little longer.)
http://www.sql-server-performance.com/awe_memory.asp

On Tue, 14 Sep 2004 12:45:32 -0400, John W. Colby wrote:
> Just to let you know some numbers on the database as it currently stands.
>
> First, I have two almost identical machines which I built to hold and
> manipulate the database. Both machines use an MSI K8N Neo motherboard with
> built in 4 port SATA and dual channel IDE, gbit LAN. The processor is the
> 754 pin Athlon64 at 3ghz. The server currently has Windows 2K and 3g RAM
> installed. Apparently Win2K can use up to 4g ram whereas XP is limited to
> 2g. Unfortunately I cannot persuade SQL Server to use more than 2g RAM so I
> am not sure that more memory than that is really useful in this context.
>
> The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor IDE
> drive holding the data files and the log file respectively. The second
> machine is currently running XP Pro. Since the two new machines have gbit
> nics built into the motherboard I bought an 8 port gbit switch so they could
> talk at full speed. In general I have found that I can run operations from
> the second machine to the server over gbit LAN at close to full speed, i.e.
> the LAN is not a severe bottleneck anymore (it definitely was at 100mbit).
-- -Francisco From andy at minstersystems.co.uk Tue Sep 14 12:25:00 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Tue, 14 Sep 2004 18:25:00 +0100 Subject: [dba-SQLServer] File extensions In-Reply-To: Message-ID: <001c01c49a7f$c3627430$b274d0d5@minster33c3r25> Thanks to all for your help. -- Andy Lacey http://www.minstersystems.co.uk > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf > Of Francisco Tapia > Sent: 14 September 2004 18:08 > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] File extensions > > > Additionally Andy, you can now buy Sql Server Developer for > $50 from M$ and you'll be able to load databases larger than > the 2gb limit. plus have more control over the environment > along w/ that you get EM and QA and all the other utilities > that come w/ Sql Server. > > > On Tue, 14 Sep 2004 09:15:26 -0700, MartyConnelly > wrote: > > Here is a freebie replacement for EM (not completely compatible > > missing some options) for SQL 7 and 2000 (2 different > versions) comes > > with VB6 source code, works through SQL DMO > > http://www.asql.biz/DbaMgr.shtm I believe BMC used to have > a free Web > > Version of EM. > > > > > > > > > > Andy Lacey wrote: > > > > >Thanks Michael. Taking it home may not be on. Whilst I have SQL > > >server theyy, as far as I know, just have a 3rd-party app that (I > > >think) uses SQL server as a BE. So they probabaly don't have EM or > > >any tools. Would that be right? If someone supplies a > product using a > > >SQL Server BE I'm assuming they don't supply much more than the > > >database and drivers. 
> > >-- > > >Andy Lacey > > >http://www.minstersystems.co.uk > > > > > > > > > > > >--------- Original Message -------- > > >From: dba-sqlserver at databaseadvisors.com > > >To: dba-sqlserver at databaseadvisors.com > > > > > >Subject: RE: [dba-SQLServer] File extensions > > >Date: 14/09/04 10:23 > > > > > > > > > > > >>Andy, > > >> > > >>I'm pretty sure with sql you can give it any extension. > However the > > >>defaults are mdf for the data and ldf for the log file. > > >> > > >>As to taking the files home... well maybe, maybe not ;-) You will > > >>need to hook up an interface of some kind to see what > what with SQL. > > >>(You probably can do it from cmd line but I'm not sure) You need > > >>Enterprise Manager(EM) or some other tool, there are free > ones out > > >>there but I don't have a link. >From EM I would backup the db > > >>(may be more then 1 data file), take the backup home, restore it > > >>with EM. If the db is over 2gb you will need the full SQL > server, if > > >>less then MSDE should be OK. > > >> > > >>hope that's enough to get you started? > > >> > > >>cheers > > >> > > >>Michael M > > >> > > >>Sent this hours ago but hasn't appeared, hence trying > again - so > > >>apologies in advance when the original turns up. > > >> > > >>Hello good people on this wet and windy Autumn evening in the UK > > >> > > >>Simple question from a simple soul. I keep a weather eye > on the SQL > > >>server list but never having actually developed a SQL system my > > >>question > > >>is: if I go to look at a client's existing system how can > I tell if it > > >>is SQL (which I think it is)? Would file suffixes tell > me, or can a > > >>developer call a database anything he/she likes? And if the file > > >>suffixes are the answer what are the magic three letters? 
> Going on from > > >>there, if I was offered the opportunity to take the data > away to have a > > >>look at it can I just zip up a file or two, and if so > what would I need > > >>in order to be able to read the data in Access when I got > back home? > > >> > > >>-- Andy Lacey > > >>http://www.minstersystems.co.uk > > >> > > >>_______________________________________________ > > >>dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com > > >>http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > >>http://www.databaseadvisors.com > > >> > > >>_______________________________________________ > > >>dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com > > >>http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > >>http://www.databaseadvisors.com > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > > > > >________________________________________________ > > >Message sent using UebiMiau 2.7.2 > > > > > >_______________________________________________ > > >dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com > > >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > >http://www.databaseadvisors.com > > > > > > > > > > > > > > > > -- > > Marty Connelly > > Victoria, B.C. 
> > Canada
> >
> > _______________________________________________
> > dba-SQLServer mailing list
> > dba-SQLServer at databaseadvisors.com
> > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> > http://www.databaseadvisors.com
>
> --
> -Francisco

From accessd at shaw.ca Tue Sep 14 11:46:24 2004
From: accessd at shaw.ca (Jim Lawrence (AccessD))
Date: Tue, 14 Sep 2004 09:46:24 -0700
Subject: [dba-SQLServer] Quotes in delimited data
In-Reply-To: <41479991.5883.1B29288B@lexacorp.com.pg>
Message-ID:

Hi Stuart:

VFTViewer does allow you to view very large files, but it has two features missing that would be needed: editing, and viewing non-text characters/files. It is a great viewer, but to get all the features I think you would have to roll your own or buy one.

Jim

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan
Sent: Tuesday, September 14, 2004 8:23 AM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] Quotes in delimited data

On 14 Sep 2004 at 5:39, Jim Lawrence (AccessD) wrote:

> Hi Stuart:
>
> Try 'NoteTab' or 'CrimsonEditor' against any file that exceeds 2GB and
> 'NoteTab' will choke, and I suspect 'CrimsonEditor' will as well. ...the old
> famous 2GB limit rears its ugly head...
> :-(

For reading very large files I use LFTViewer
http://www.swiftgear.com/ltfviewer/features.html

--
Stuart

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fhtapia at gmail.com Tue Sep 14 13:11:18 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Tue, 14 Sep 2004 11:11:18 -0700
Subject: [dba-SQLServer] Database Files
In-Reply-To: <006b01c49a72$21dcbc40$e8dafea9@ColbyM6805>
References: <59A61174B1F5B54B97FD4ADDE71E7D01011B8B@ddi-01.DDI.local> <006b01c49a72$21dcbc40$e8dafea9@ColbyM6805>
Message-ID:

When the drives are configured as Raid 5, I understand that it improves read performance, which you've mentioned this db will mostly be doing. You never mentioned whether these drives were Raid 5 or not.

On Tue, 14 Sep 2004 11:47:20 -0400, John W. Colby wrote:
> In the quest to get sufficient storage to do the nVLDB bulk mail database I
> split the database into 5 containers, 4 for storage and one for the log
> file. Understand that this is still, to this point, and likely will remain,
> just a single table. Does anyone understand the usage of database files
> well enough to tell me whether splitting it into multiple pieces like this
> causes performance degradation or enhancement? I have read that using
> different files for different tables can enhance performance, but what about
> where it is all just a single table?
>
> In the end I may need to leave it as multiple files since processing such as
> adding indexes can temporarily inflate the files by almost double. When I
> am done they shrink back down nicely but during processing they get big.
>
> BTW, as to the actual size of the database (one table, 164 million records,
> 660 fields, 3000+ bytes / record) - it used 4 files of ~40 gbytes after
> shrinking.
I added in the PK (autonumber) and indexed the state code field > and ended up with 2 files of 40g and 2 files of 86g after shrinking. So the > indexes added a LOT of size to the database. > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco Get Firefox! From jwcolby at colbyconsulting.com Tue Sep 14 14:11:30 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 15:11:30 -0400 Subject: [dba-SQLServer] Database Files In-Reply-To: Message-ID: <006e01c49a8e$a7304ad0$e8dafea9@ColbyM6805> In fact I ended up just using single drives on each SATA channel, no raid of any kind. I was focused on getting the data into the db and splitting the db into 4 files where each file could get as large as 250g was my biggest priority. Now that the db is in, I may consolidate some of the files and throw them on a Raid0 array to up the read performance. What I was really talking about was whether SQL Server took a performance hit just by splitting a table across 4 files. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Tuesday, September 14, 2004 2:11 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Database Files When the drives are configured as Raid 5, I understand that it improves read performance, which you will you've mentioned this db would be doing mostly. You never mentioned if these drives ere Raid 5 or not. On Tue, 14 Sep 2004 11:47:20 -0400, John W. Colby wrote: > In the quest to get sufficient storage to do the nVLDB bulk mail > database I split the database into 5 containers, 4 for storage and one > for the log file. 
Understand that this is still to this point, and > likely will remain just a single table. Does anyone understand the > usage of database files well enough to tell me whether splitting it > into multiple pieces like this causes performance degradation or > enhancement? I have read that using different files for different > tables can enhance performance, but what about where it is all just a > single table? > > In the end I may need to leave it as multiple files since processing > such as adding indexes can temporarily inflate the files by almost > double. When I am done they shrink back down nicely but during > processing they get big. > > BTW, as to the actual size of the database (one table, 164 million > records, 660 fields, 3000+ bytes / record) - it used 4 files of ~40 > gbytes after shrinking. I added in the PK (autonumber) and indexed > the state code field and ended up with 2 files of 40g and 2 files of > 86g after shrinking. So the indexes added a LOT of size to the > database. > > John W. 
> Colby
> www.ColbyConsulting.com

--
-Francisco

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From mwp.reid at qub.ac.uk Tue Sep 14 14:49:02 2004
From: mwp.reid at qub.ac.uk (Martin Reid)
Date: Tue, 14 Sep 2004 20:49:02 +0100
Subject: [dba-SQLServer] MS ONLINE CHAT about to start
References: <006b01c49a72$21dcbc40$e8dafea9@ColbyM6805>
Message-ID: <002e01c49a93$e338ff90$0100a8c0@Martin>

SQL Server Chat: SQL Server Analysis Services

Come join the Microsoft Analysis Services team and ask questions about data mining, cubes, dimensional models and just about everything else you can think of when it comes to OLAP and Data Mining in SQL Server. This is the time and place to get all of your questions answered about SQL Server 2000 and SQL Server 2005 OLAP and Data Mining techniques and approaches.

Here's the URL: http://communities2.microsoft.com/home/chatroom.aspx?siteid=34000015

Please join us! Thanks much!

From ebarro at afsweb.com Tue Sep 14 14:54:17 2004
From: ebarro at afsweb.com (Eric Barro)
Date: Tue, 14 Sep 2004 12:54:17 -0700
Subject: [dba-SQLServer] FYI - nVLDB performance
In-Reply-To:
Message-ID:

SQL Server Standard Edition can only address up to 2Gb of memory. Our SQL Server guru says that you can tweak the settings to coerce SQL Server to use more than 2Gb of RAM. He says that he's seen it done, but he's not familiar enough with it to outline the steps for me. I haven't personally had the need to tweak it to that level. He also verified that having the files on separate drives, rather than on logically partitioned drives, will speed things up.
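[Editor's note: the spread-the-files-across-drives layout discussed here is declared at database creation time. A sketch of a four-data-file, separate-log-drive layout in SQL Server 2000 syntax; names, drive letters, and sizes are hypothetical, chosen to mirror John's described setup.]

```sql
-- Four data files on four physical drives; SQL Server stripes writes
-- across the files in a filegroup with proportional fill, so even a
-- single table is spread over all four. Log on its own drive.
CREATE DATABASE nVLDB
ON PRIMARY
    (NAME = nVLDB_Data1, FILENAME = 'E:\data\nVLDB1.mdf', SIZE = 40GB),
    (NAME = nVLDB_Data2, FILENAME = 'F:\data\nVLDB2.ndf', SIZE = 40GB),
    (NAME = nVLDB_Data3, FILENAME = 'G:\data\nVLDB3.ndf', SIZE = 40GB),
    (NAME = nVLDB_Data4, FILENAME = 'H:\data\nVLDB4.ndf', SIZE = 40GB)
LOG ON
    (NAME = nVLDB_Log, FILENAME = 'I:\log\nVLDB.ldf', SIZE = 1GB, MAXSIZE = 200GB)
```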
--- Eric Barro Senior Systems Analyst Advanced Field Services (208) 772-7060 http://www.afsweb.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Francisco Tapia Sent: Tuesday, September 14, 2004 10:24 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] FYI - nVLDB performance John, Thanks for keeping us updated... It seems logical to me since you are running backups of your data that it may be wise to switch from FULL loggging to SIMPLE logging now that you're doing Indexes. This way you don't log every event. Additionally restriciting the size of the log to a specific size will also help speed things along, this way SQL Server is not tied up w/ useless things such as creating virtual page files. In general this is good practice, but in your case it will be very much more needed. I've posted as has Eric and I beleive others, on how to do this, if you don't have the email, let me know and I'll post again. as far as trying to use the 4gb, you may want to take a look at this... this will obviously help speed up performance, because up until now you've only been using 2gb, and yes Windows 2000 natively supports 4gb, while XP is restricted to 2gb... why? I dunno. (but that's one more reason for me to hold on to 2000 a little longer) http://www.sql-server-performance.com/awe_memory.asp On Tue, 14 Sep 2004 12:45:32 -0400, John W. Colby wrote: > Just to let you know some numbers on the database as it currently stands. > > First, I have two almost identical machines which I built to hold and > manipulate the database. Both machines use an MSI K8N Neo motherboard with > built in 4 port SATA and dual channel IDE, gbit LAN. The processor is the > 754 pin Athlon64 at 3ghz. The server currently has Windows 2K and 3g RAM > installed. Apparently Win2K can use up to 4g ram whereas XP is limited to > 2g. 
Unfortunately I cannot persuade SQL Server to use more than 2g RAM so I > am not sure that more memory than that is really useful in this context. > > The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor IDE > drive holding the data files and the log file respectively. The second > machine is currently running XP Pro. Since the two new machines have gbit > nics built into the motherboard I bought an 8 port gbit switch so they could > talk at full speed. In general I have found that I can run operations from > the second machine to the server over gbit LAN at close to full speed, i.e. > the LAN is not a severe bottleneck anymore (it definitely was at 100mbit). -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com ----------------------------------------- The information contained in this e-mail message and any file and/or attachment transmitted herewith is confidential and may be legally privileged. It is intended solely for the private use of the addressee and must not be disclosed to or used by anyone other than the addressee. If you receive this transmission by error, please immediately notify the sender by reply e-mail and destroy the original transmission and its attachments. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of email transmission. Users and employees of the e-mail system are expressly required not to make defamatory statements and not to infringe or authorize any infringement of copyright or any other legal right by email communications. Any such communication is contrary to company policy. 
From jwcolby at colbyconsulting.com Tue Sep 14 15:07:30 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 16:07:30 -0400 Subject: [dba-SQLServer] FYI - nVLDB performance In-Reply-To: Message-ID: <006f01c49a96$7a934420$e8dafea9@ColbyM6805> Francisco, In fact all this AWE crap is nothing more than the old style EMS from early windows days, paging of memory into an address space that the OS (the CPU more correctly) can see. That is one of the reasons to go full 64 bit since that stuff is no longer required. The overhead to use AWE is supposed to be quite high. I found a similar article (it might be this article) on MS but it really didn't definitively answer how to get SQL Server using more than 2g in a system with 3g under Windows 2K. Of course I don't have enterprise edition anyway so it looks like I simply can't use more than 2G of ram. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Tuesday, September 14, 2004 1:24 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] FYI - nVLDB performance John, Thanks for keeping us updated... It seems logical to me since you are running backups of your data that it may be wise to switch from FULL logging to SIMPLE logging now that you're doing Indexes. This way you don't log every event. Additionally, restricting the log to a specific size will also help speed things along; this way SQL Server is not tied up w/ useless things such as creating virtual page files. In general this is good practice, but in your case it will be very much more needed. I've posted, as has Eric and I believe others, on how to do this; if you don't have the email, let me know and I'll post again. 
as far as trying to use the 4gb, you may want to take a look at this... this will obviously help speed up performance, because up until now you've only been using 2gb, and yes Windows 2000 natively supports 4gb, while XP is restricted to 2gb... why? I dunno. (but that's one more reason for me to hold on to 2000 a little longer) http://www.sql-server-performance.com/awe_memory.asp On Tue, 14 Sep 2004 12:45:32 -0400, John W. Colby wrote: > Just to let you know some numbers on the database as it currently > stands. > > First, I have two almost identical machines which I built to hold and > manipulate the database. Both machines use an MSI K8N Neo motherboard > with built in 4 port SATA and dual channel IDE, gbit LAN. The > processor is the 754 pin Athlon64 at 3ghz. The server currently has > Windows 2K and 3g RAM installed. Apparently Win2K can use up to 4g > ram whereas XP is limited to 2g. Unfortunately I cannot persuade SQL > Server to use more than 2g RAM so I am not sure that more memory than > that is really useful in this context. > > The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor > IDE drive holding the data files and the log file respectively. The > second machine is currently running XP Pro. Since the two new > machines have gbit nics built into the motherboard I bought an 8 port > gbit switch so they could talk at full speed. In general I have found > that I can run operations from the second machine to the server over > gbit LAN at close to full speed, i.e. the LAN is not a severe > bottleneck anymore (it definitely was at 100mbit). 
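[Editor's note: the log and memory changes Francisco suggests above map to a few T-SQL statements. This is only a sketch; the database name, logical log file name, and sizes are illustrative, and as the thread notes, the AWE option requires Enterprise (or Developer) Edition.]

```sql
-- Drop to SIMPLE recovery so bulk index builds are minimally logged.
ALTER DATABASE TheList SET RECOVERY SIMPLE

-- Cap the log file at a fixed size so it cannot grow without bound.
-- 'TheList_log' is the logical log file name (check with sp_helpfile).
ALTER DATABASE TheList
MODIFY FILE (NAME = TheList_log, MAXSIZE = 2000MB)

-- Let SQL Server address RAM beyond 2gb via AWE
-- (Enterprise/Developer Edition only; takes effect after a service restart).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory', 3072
RECONFIGURE
```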
-- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From artful at rogers.com Tue Sep 14 15:15:31 2004 From: artful at rogers.com (Arthur Fuller) Date: Tue, 14 Sep 2004 16:15:31 -0400 Subject: [dba-SQLServer] Quotes in delimited data In-Reply-To: <002501c499a4$ed47d2d0$e8dafea9@ColbyM6805> Message-ID: <028301c49a97$95ed1ba0$6501a8c0@rock> Not having seen the data, but still willing to shoot my mouth off... I would be sorely tempted to break this table into several, along the lines of likelihood-of-search. Put the most-likely fields in T1 and the second-most-likely in T2, and so on, with 1:1 joins so you could optimize the searches, joining only when you have to. Second, I would seriously consider putting all those booleans into a big bit-mapped field so you could AND and OR and XOR and so on. Just my $.02.... Arthur -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Monday, September 13, 2004 11:19 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Quotes in delimited data OK, now for a 660 field table? ;-) John W. Colby www.ColbyConsulting.com From artful at rogers.com Tue Sep 14 15:36:22 2004 From: artful at rogers.com (Arthur Fuller) Date: Tue, 14 Sep 2004 16:36:22 -0400 Subject: [dba-SQLServer] File extensions In-Reply-To: <000b01c49a26$e9c19830$b274d0d5@minster33c3r25> Message-ID: <028a01c49a9a$7f647e70$6501a8c0@rock> It's dead simple to take the files home, assuming that you have a device of suitable size. 1. Perform a backup using Enterprise Manager. 2. Zip (or far better, RAR -- cf. WinRAR; google it) the backup file. 3. Copy said file to your medium (CD, DVD, whatever). 
For BIG files I use my ftp site, so as to sidestep the limitations of CD, DVD, email attachment, etc. 4. Copy it to a suitable directory on your home machine. Unzip/unRAR it. Simplest location is x:\Program Files\Microsoft SQL Server\MSSQL\Backup (the default location to look for backups). 5. Run EM at home and select any database (this is first time; once the database exists at home select it). Select Tools/Restore Database. Change the name to the appropriate name. Select From Device in the option buttons. Navigate to the backup file. Click the Options tab and if necessary edit the filenames (you might have to change drive/dir depending on similarity/difference between your office box and your home box; if so just click in the filename and edit to suit). Also on the Options tab, click the Force Restore checkbox. 6. Click OK. First time, this should create the database, using the named files and their specified locations, and restore it from the backup. Subsequent times, the database will exist already, so you just select it in step 5 and step through the prompts. This might seem a little complex upon first read, but I assure you that it's dead simple. In the last site I worked at, I did this every day before leaving work, and if I did anything at home, then I reversed the process before returning to work. What never ceases to amaze me about MS-SQL is the speed of its backups. I always did a complete (as opposed to differential) backup on a db that was about 300 Megs and it NEVER took more than a couple of minutes -- it was WAY faster than copying the actual files from one dir to another. RARing it took a little longer but compressed it magnificently, and the FTP from home took only a few minutes. Restore and BOOM, data identical to what I just left at work! I would stay away from trying to copy the actual MDF and LDF files and instead go with Backup and Restore. 
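[Editor's note: for anyone who prefers Query Analyzer to Enterprise Manager, the round trip described above can be sketched in T-SQL. The database name, logical file names, and paths below are purely illustrative; check yours with sp_helpfile.]

```sql
-- At work: full backup to a single file (WITH INIT overwrites any older backup set).
BACKUP DATABASE MyDb
TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb.bak'
WITH INIT

-- At home, after copying (and unRARing) the .bak file over:
-- WITH REPLACE is the T-SQL equivalent of the Force Restore checkbox, and
-- MOVE relocates the files when the drive/dir layout differs between boxes.
RESTORE DATABASE MyDb
FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb.bak'
WITH REPLACE,
     MOVE 'MyDb_Data' TO 'D:\MSSQL\Data\MyDb.mdf',
     MOVE 'MyDb_Log' TO 'D:\MSSQL\Data\MyDb_log.ldf'
```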
Arthur -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Andy Lacey Sent: Tuesday, September 14, 2004 2:49 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] File extensions Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? -- Andy Lacey http://www.minstersystems.co.uk _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Tue Sep 14 15:38:54 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 16:38:54 -0400 Subject: [dba-SQLServer] FYI - nVLDB performance In-Reply-To: Message-ID: <007201c49a9a$dd59afa0$e8dafea9@ColbyM6805> Another set of numbers. I built two queries to pull subsets. One pulls every 100th record, JUST the PK column and builds a table from the results. This takes about 45 seconds to complete. The other pulls every 100th record, all the data and builds a table from the results. This takes 34 MINUTES. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Tuesday, September 14, 2004 1:24 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] FYI - nVLDB performance John, Thanks for keeping us updated... It seems logical to me since you are running backups of your data that it may be wise to switch from FULL logging to SIMPLE logging now that you're doing Indexes. This way you don't log every event. Additionally, restricting the log to a specific size will also help speed things along; this way SQL Server is not tied up w/ useless things such as creating virtual page files. In general this is good practice, but in your case it will be very much more needed. I've posted, as has Eric and I believe others, on how to do this; if you don't have the email, let me know and I'll post again. As far as trying to use the 4gb, you may want to take a look at this... this will obviously help speed up performance, because up until now you've only been using 2gb, and yes Windows 2000 natively supports 4gb, while XP is restricted to 2gb... why? I dunno. (But that's one more reason for me to hold on to 2000 a little longer.) http://www.sql-server-performance.com/awe_memory.asp On Tue, 14 Sep 2004 12:45:32 -0400, John W. Colby wrote: > Just to let you know some numbers on the database as it currently > stands. > > First, I have two almost identical machines which I built to hold and > manipulate the database. Both machines use an MSI K8N Neo motherboard > with built in 4 port SATA and dual channel IDE, gbit LAN. The > processor is the 754 pin Athlon64 at 3ghz. The server currently has > Windows 2K and 3g RAM installed. Apparently Win2K can use up to 4g > ram whereas XP is limited to 2g. 
Unfortunately I cannot persuade SQL > Server to use more than 2g RAM so I am not sure that more memory than > that is really useful in this context. > > The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor > IDE drive holding the data files and the log file respectively. The > second machine is currently running XP Pro. Since the two new > machines have gbit nics built into the motherboard I bought an 8 port > gbit switch so they could talk at full speed. In general I have found > that I can run operations from the second machine to the server over > gbit LAN at close to full speed, i.e. the LAN is not a severe > bottleneck anymore (it definitely was at 100mbit). -- -Francisco _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Tue Sep 14 15:43:04 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Tue, 14 Sep 2004 13:43:04 -0700 Subject: [dba-SQLServer] FYI - nVLDB performance In-Reply-To: <006f01c49a96$7a934420$e8dafea9@ColbyM6805> References: <006f01c49a96$7a934420$e8dafea9@ColbyM6805> Message-ID: Yes, you are quite right about the AWE, but the /3GB switch will enable another Gig to work for you w/o the AWE overhead... Since I suspect that you are running the Developer Edition of SQL Server 2000, you should be able to use the 4gb, since you mentioned earlier that your workstation IS seeing the full 4gb but SQL was only seeing the 2gb. On Tue, 14 Sep 2004 16:07:30 -0400, John W. Colby wrote: > Francisco, > > In fact all this AWE crap is nothing more than the old style EMS from early > windows days, paging of memory into an address space that the OS (the CPU > more correctly) can see. That is one of the reasons to go full 64 bit since > that stuff is no longer required. The overhead to use AWE is supposed to be > quite high. 
> > I found a similar article (it might be this article) on MS but it really > didn't definitively answer how to get SQL Server using more than 2g in a > system with 3 g under Windows 2K. Of course I don't have enterprise edition > anyway so it looks like I simply can't use more than 2G of ram. > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Tuesday, September 14, 2004 1:24 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] FYI - nVLDB performance > > John, > Thanks for keeping us updated... It seems logical to me since you are > running backups of your data that it may be wise to switch from FULL > loggging to SIMPLE logging now that you're doing Indexes. This way you > don't log every event. Additionally restriciting the size of the log to a > specific size will also help speed things along, this way SQL Server is not > tied up w/ useless things such as creating virtual page files. In general > this is good practice, but in your case it > will be very much more needed. I've posted as has Eric and I beleive > others, on how to do this, if you don't have the email, let me know and I'll > post again. > > as far as trying to use the 4gb, you may want to take a look at this... this > will obviously help speed up performance, because up until now you've only > been using 2gb, and yes Windows 2000 natively supports 4gb, while XP is > restricted to 2gb... why? I dunno. (but that's one more reason for me to > hold on to 2000 a little longer) > > http://www.sql-server-performance.com/awe_memory.asp > > On Tue, 14 Sep 2004 12:45:32 -0400, John W. Colby > wrote: > > Just to let you know some numbers on the database as it currently > > stands. > > > > First, I have two almost identical machines which I built to hold and > > manipulate the database. 
Both machines use an MSI K8N Neo motherboard > > with built in 4 port SATA and dual channel IDE, gbit LAN. The > > processor is the 754 pin Athlon64 at 3ghz. The server currently has > > Windows 2K and 3g RAM installed. Apparently Win2K can use up to 4g > > ram whereas XP is limited to 2g. Unfortunately I cannot persuade SQL > > Server to use more than 2g RAM so I am not sure that more memory than > > that is really useful in this context. > > > > The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor > > IDE drive holding the data files and the log file respectively. The > > second machine is currently running XP Pro. Since the two new > > machines have gbit nics built into the motherboard I bought an 8 port > > gbit switch so they could talk at full speed. In general I have found > > that I can run operations from the second machine to the server over > > gbit LAN at close to full speed, i.e. the LAN is not a severe > > bottleneck anymore (it definitely was at 100mbit). > > -- > -Francisco > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco Get Firefox! From andy at minstersystems.co.uk Tue Sep 14 16:15:34 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Tue, 14 Sep 2004 22:15:34 +0100 Subject: [dba-SQLServer] File extensions In-Reply-To: <028a01c49a9a$7f647e70$6501a8c0@rock> Message-ID: <004001c49a9f$f94eb8e0$b274d0d5@minster33c3r25> Thanks for this Arthur. 
-- Andy Lacey http://www.minstersystems.co.uk > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf > Of Arthur Fuller > Sent: 14 September 2004 21:36 > To: dba-sqlserver at databaseadvisors.com > Subject: RE: [dba-SQLServer] File extensions > > > It's dead simple to take the files home, assuming that you > have a device of suitable size. > > 1. Perform a backup using Enterprise Manager. > 2. Zip (or far better, RAR (c.f. WinRAR -- google it) the > backup file. 3. Copy said file to your medium (CD, DVD, > whatever). For BIG files I use my ftp site, so as to sidestep > the limitations of CD, DVD, email attachment, etc. 4. Copy it > to a suitable directory on your home machine. Unzip/unRAR it. > Simplest location is x:\Program Files\Microsoft SQL > Server\MSSQL\Backup (the default location to look for > backups). 5. Run EM at home and select any database (this is > first time; once the database exists at home select it). > Select Tools/Restore Database. Change the name to the > appropriate name. Select From Device in the opton buttons. > Navigate to the backup file. Click the Options tab and if > necessary edit the filenames (you might have to change > drive/dir depending on similarity/difference between your > office box and your home box; if so just click in the > filename and edit to suit). Also on the Options tab, click > the Force Restore checkbox. 6. Click OK. > > First time, this should create the database, using the named > files and their specified locations, and restore it from the > backup. Subsequent times, the database will exist already, so > you just select it in step 5 and step through the prompts. > This might seem a little complex upon first read, but I > assure you that it's dead simple. In the last site I worked > at, I did this every day before leaving work, and if I did > anything at home, then I reversed the process before > returning to work. 
> > What never ceases to amaze me about MS-SQL is the speed of > its backups. I always did a complete (as opposed to > differential) backup on a db that was about 300 Megs and it > NEVER took more than a couple of minutes -- it was WAY faster > than copying the actual files from one dir to another. RARing > it took a little longer but compressed it magnificently, and > the FTP from home took only a few minutes. Restore a BOOM, > data identical to what I just left at work! > > I would stay away from trying to copy the actual MDF and LDF > files and instead go with Backup and Restore. > > Arthur > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf > Of Andy Lacey > Sent: Tuesday, September 14, 2004 2:49 AM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] File extensions > > > Sent this hours ago but hasn't appeared, hence trying again > - so apologies in advance when the original turns up. > > Hello good people on this wet and windy Autumn evening in the UK > > Simple question from a simple soul. I keep a weather eye on > the SQL server list but never having actually developed a SQL > system my question > is: if I go to look at a client's existing system how can I > tell if it is SQL (which I think it is)? Would file suffixes > tell me, or can a developer call a database anything he/she > likes? And if the file suffixes are the answer what are the > magic three letters? Going on from there, if I was offered > the opportunity to take the data away to have a look at it > can I just zip up a file or two, and if so what would I need > in order to be able to read the data in Access when I got back home? 
> > -- Andy Lacey > http://www.minstersystems.co.uk > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > From jwcolby at colbyconsulting.com Tue Sep 14 16:43:21 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 17:43:21 -0400 Subject: [dba-SQLServer] File extensions In-Reply-To: <028a01c49a9a$7f647e70$6501a8c0@rock> Message-ID: <007301c49aa3$de47ca10$e8dafea9@ColbyM6805> Arthur, Why go through the backup step? Can't you just zip the files themselves? Then attach them at home? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Tuesday, September 14, 2004 4:36 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions It's dead simple to take the files home, assuming that you have a device of suitable size. 1. Perform a backup using Enterprise Manager. 2. Zip (or far better, RAR -- cf. WinRAR; google it) the backup file. 3. Copy said file to your medium (CD, DVD, whatever). For BIG files I use my ftp site, so as to sidestep the limitations of CD, DVD, email attachment, etc. 4. Copy it to a suitable directory on your home machine. Unzip/unRAR it. Simplest location is x:\Program Files\Microsoft SQL Server\MSSQL\Backup (the default location to look for backups). 5. Run EM at home and select any database (this is first time; once the database exists at home select it). Select Tools/Restore Database. Change the name to the appropriate name. Select From Device in the option buttons. 
Navigate to the backup file. Click the Options tab and if necessary edit the filenames (you might have to change drive/dir depending on similarity/difference between your office box and your home box; if so just click in the filename and edit to suit). Also on the Options tab, click the Force Restore checkbox. 6. Click OK. First time, this should create the database, using the named files and their specified locations, and restore it from the backup. Subsequent times, the database will exist already, so you just select it in step 5 and step through the prompts. This might seem a little complex upon first read, but I assure you that it's dead simple. In the last site I worked at, I did this every day before leaving work, and if I did anything at home, then I reversed the process before returning to work. What never ceases to amaze me about MS-SQL is the speed of its backups. I always did a complete (as opposed to differential) backup on a db that was about 300 Megs and it NEVER took more than a couple of minutes -- it was WAY faster than copying the actual files from one dir to another. RARing it took a little longer but compressed it magnificently, and the FTP from home took only a few minutes. Restore a BOOM, data identical to what I just left at work! I would stay away from trying to copy the actual MDF and LDF files and instead go with Backup and Restore. Arthur -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Andy Lacey Sent: Tuesday, September 14, 2004 2:49 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] File extensions Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. 
I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? -- Andy Lacey http://www.minstersystems.co.uk _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From JFK at puj.edu.co Tue Sep 14 17:14:50 2004 From: JFK at puj.edu.co (Julian Felipe Castrillon Trejos) Date: Tue, 14 Sep 2004 17:14:50 -0500 Subject: [dba-SQLServer] triggers question Message-ID: Hello everybody. I have a question I hope you can help me answer. I'm working with SQL Server 2000 and I'm writing an update trigger in a database. 
The thing is, I have two tables and I want to be sure both of them have the same data: when the principal table is updated, that update must also be made in the secondary table. But I have a problem. In the principal table there are several rows with the same value in the field to be updated, so I don't really know which particular row was affected in order to make the update in the secondary table. Here is an example:

field1  field2  field3
1       x       4
2       y       4
3       z       4
4       w       4
5       u       4
6       c       4

Both of the tables have the same data. When I change the value 4 in field3 of the first row (1 x 4) in the principal table, I want the update trigger to change the same row in the secondary table. But how can I identify that row in the secondary table if the trigger only gives me the value 4 and the new value for that field, and all of the rows have the same 4 in field3? I don't know which of the rows was the one updated. I want to know if there is a way to ask the trigger for the number of the row affected, or something like that, or if there is a way to extract the value from field1, which is unique and could identify exactly one row. Thank you for your help. PD: sorry about the English. --------------------------------------------- Julian Castrillon T Systems and Computing Engineer Analyst, IDA Ltda (Ingenieria y desarrollo aplicados) --------------------------------------------- Jesus saves, but only Buddha made incremental backups From ridermark at gmail.com Tue Sep 14 18:13:16 2004 From: ridermark at gmail.com (Mark Rider) Date: Tue, 14 Sep 2004 18:13:16 -0500 Subject: [dba-SQLServer] DTS Issues after Windows Upgrade Message-ID: I am VERY confused, and hope someone can help. I upgraded a Win2K server to Win2K3 Enterprise. Everything went smoothly, and it did an upgrade rather than requiring a new install. I was very happy about that. I enabled PAE to take advantage of the 4 Gb RAM that I have in it. 
The SQL Server 2K that was running on the box has had one modification - AWE enabled - to allow it to utilize the extra memory that I added (and therefore had to upgrade the OS). I can get to everything in the SQL Server EXCEPT for the DTS Import/Export wizards. I get errors indicating that ParseDisplayName failed, and after clicking OK the error comes up that dtswiz.exe has a problem reading the memory at "0x00f9fe48" and I have to close it. If I access the DTS through another PC everything works fine, but the time it takes to do this is a lot longer than it takes on the database itself. I am going through and (re)applying the service packs and Security Patches for SQL Server, hoping that will help. But I assume that I have done something silly with the memory settings, and am hoping that someone will know what it is. Microsoft does not offer a lot of help. I have the Database using dynamic memory from 2048 - 3072 MB. Thanks for any insight! -- Mark Rider http://commonsensesecurity.info From andrew.haslett at ilc.gov.au Tue Sep 14 19:28:09 2004 From: andrew.haslett at ilc.gov.au (Haslett, Andrew) Date: Wed, 15 Sep 2004 09:58:09 +0930 Subject: [dba-SQLServer] File extensions Message-ID: <0A870603A2A816459078203FC07F4CD204C461@adl01s055.ilcorp.gov.au> Because a backup truncates the inactive portion of the log file, meaning it can then be shrunk and saves space. It also places the DB in a 'complete' or 'safe' state (by rolling forward / back transactions as required), which you can't guarantee by simply copying the db files. Backups are the recommended method. -----Original Message----- From: John W. Colby [mailto:jwcolby at colbyconsulting.com] Sent: Wednesday, 15 September 2004 7:13 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions Arthur, Why go through the backup step? Can't you just zip the files themselves? Then attach them at home? John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Tuesday, September 14, 2004 4:36 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions It's dead simple to take the files home, assuming that you have a device of suitable size. 1. Perform a backup using Enterprise Manager. 2. Zip (or far better, RAR (c.f. WinRAR -- google it) the backup file. 3. Copy said file to your medium (CD, DVD, whatever). For BIG files I use my ftp site, so as to sidestep the limitations of CD, DVD, email attachment, etc. 4. Copy it to a suitable directory on your home machine. Unzip/unRAR it. Simplest location is x:\Program Files\Microsoft SQL Server\MSSQL\Backup (the default location to look for backups). 5. Run EM at home and select any database (this is first time; once the database exists at home select it). Select Tools/Restore Database. Change the name to the appropriate name. Select From Device in the opton buttons. Navigate to the backup file. Click the Options tab and if necessary edit the filenames (you might have to change drive/dir depending on similarity/difference between your office box and your home box; if so just click in the filename and edit to suit). Also on the Options tab, click the Force Restore checkbox. 6. Click OK. First time, this should create the database, using the named files and their specified locations, and restore it from the backup. Subsequent times, the database will exist already, so you just select it in step 5 and step through the prompts. This might seem a little complex upon first read, but I assure you that it's dead simple. In the last site I worked at, I did this every day before leaving work, and if I did anything at home, then I reversed the process before returning to work. What never ceases to amaze me about MS-SQL is the speed of its backups. 
I always did a complete (as opposed to differential) backup on a db that was about 300 Megs and it NEVER took more than a couple of minutes -- it was WAY faster than copying the actual files from one dir to another. RARing it took a little longer but compressed it magnificently, and the FTP from home took only a few minutes. Restore a BOOM, data identical to what I just left at work! I would stay away from trying to copy the actual MDF and LDF files and instead go with Backup and Restore. Arthur -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Andy Lacey Sent: Tuesday, September 14, 2004 2:49 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] File extensions Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? 
-- Andy Lacey http://www.minstersystems.co.uk _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com IMPORTANT - PLEASE READ ******************** This email and any files transmitted with it are confidential and may contain information protected by law from disclosure. If you have received this message in error, please notify the sender immediately and delete this email from your system. No warranty is given that this email or files, if attached to this email, are free from computer viruses or other defects. They are provided on the basis the user assumes all responsibility for loss, damage or consequence resulting directly or indirectly from their use, whether caused by the negligence of the sender or not. From jwcolby at colbyconsulting.com Tue Sep 14 19:47:19 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 14 Sep 2004 20:47:19 -0400 Subject: [dba-SQLServer] File extensions In-Reply-To: <0A870603A2A816459078203FC07F4CD204C461@adl01s055.ilcorp.gov.au> Message-ID: <007501c49abd$90c73f90$e8dafea9@ColbyM6805> OK fine, sounds good to me. John W.
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Haslett, Andrew Sent: Tuesday, September 14, 2004 8:28 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: RE: [dba-SQLServer] File extensions Because a backup truncates the inactive portion of the log file, meaning it can then be shrunk and saves space. It also places the DB in a 'complete' or 'safe' state (by rolling forward / back transactions as required), which you can't guarantee by simply copying the db files. Backups are the recommended method. -----Original Message----- From: John W. Colby [mailto:jwcolby at colbyconsulting.com] Sent: Wednesday, 15 September 2004 7:13 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions Arthur, Why go through the backup step? Can't you just zip the files themselves? Then attach them at home? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller Sent: Tuesday, September 14, 2004 4:36 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] File extensions It's dead simple to take the files home, assuming that you have a device of suitable size. 1. Perform a backup using Enterprise Manager. 2. Zip (or far better, RAR (c.f. WinRAR -- google it) the backup file. 3. Copy said file to your medium (CD, DVD, whatever). For BIG files I use my ftp site, so as to sidestep the limitations of CD, DVD, email attachment, etc. 4. Copy it to a suitable directory on your home machine. Unzip/unRAR it. Simplest location is x:\Program Files\Microsoft SQL Server\MSSQL\Backup (the default location to look for backups). 5. Run EM at home and select any database (this is first time; once the database exists at home select it). Select Tools/Restore Database.
Change the name to the appropriate name. Select From Device in the opton buttons. Navigate to the backup file. Click the Options tab and if necessary edit the filenames (you might have to change drive/dir depending on similarity/difference between your office box and your home box; if so just click in the filename and edit to suit). Also on the Options tab, click the Force Restore checkbox. 6. Click OK. First time, this should create the database, using the named files and their specified locations, and restore it from the backup. Subsequent times, the database will exist already, so you just select it in step 5 and step through the prompts. This might seem a little complex upon first read, but I assure you that it's dead simple. In the last site I worked at, I did this every day before leaving work, and if I did anything at home, then I reversed the process before returning to work. What never ceases to amaze me about MS-SQL is the speed of its backups. I always did a complete (as opposed to differential) backup on a db that was about 300 Megs and it NEVER took more than a couple of minutes -- it was WAY faster than copying the actual files from one dir to another. RARing it took a little longer but compressed it magnificently, and the FTP from home took only a few minutes. Restore a BOOM, data identical to what I just left at work! I would stay away from trying to copy the actual MDF and LDF files and instead go with Backup and Restore. Arthur -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Andy Lacey Sent: Tuesday, September 14, 2004 2:49 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] File extensions Sent this hours ago but hasn't appeared, hence trying again - so apologies in advance when the original turns up. Hello good people on this wet and windy Autumn evening in the UK Simple question from a simple soul. 
I keep a weather eye on the SQL server list but never having actually developed a SQL system my question is: if I go to look at a client's existing system how can I tell if it is SQL (which I think it is)? Would file suffixes tell me, or can a developer call a database anything he/she likes? And if the file suffixes are the answer what are the magic three letters? Going on from there, if I was offered the opportunity to take the data away to have a look at it can I just zip up a file or two, and if so what would I need in order to be able to read the data in Access when I got back home? -- Andy Lacey http://www.minstersystems.co.uk
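Andrew's point above -- that a backup truncates the inactive portion of the log -- can be expressed in T-SQL, along with the related advice elsewhere in the thread to drop to SIMPLE recovery during bulk index-building work. A sketch for SQL Server 2000; MyDb, the logical log file name MyDb_Log, and the paths are placeholders:

```sql
-- Back up the log (truncating its inactive portion), then shrink the physical file
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log.bak'
DBCC SHRINKFILE (MyDb_Log, 100)   -- target size in MB

-- For bulk loads where point-in-time recovery is not needed, SIMPLE recovery
-- lets the log truncate on checkpoint instead of growing until it is backed up
ALTER DATABASE MyDb SET RECOVERY SIMPLE
```

The trade-off: under SIMPLE recovery you can only restore to the last full or differential backup, not to a point in time.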
From fhtapia at gmail.com Tue Sep 14 22:46:11 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Tue, 14 Sep 2004 20:46:11 -0700 Subject: [dba-SQLServer] FYI - nVLDB performance In-Reply-To: <007201c49a9a$dd59afa0$e8dafea9@ColbyM6805> References: <007201c49a9a$dd59afa0$e8dafea9@ColbyM6805> Message-ID: *GROAN* John, I know I would hate to wait SOOOO long for any type of data. The time bothers me, but what really bothers me is the idea of waiting that long only to find out you wanted either another field or some alternate criteria :( Please check out www.sqlservercentral.com and check their forums; I believe you will find some very USEFUL tips and help with optimizing this sort of thing... :( On Tue, 14 Sep 2004 16:38:54 -0400, John W. Colby wrote: > Another set of numbers. > > I built two queries to pull subsets. > > One pulls every 100th record, JUST the PK column and builds a table from the > results. This takes about 45 seconds to complete. > The other pulls every 100th record, all the data and builds a table from the > results. This takes 34 MINUTES. > > > > John W. Colby > www.ColbyConsulting.com > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Tuesday, September 14, 2004 1:24 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] FYI - nVLDB performance > > John, > Thanks for keeping us updated... It seems logical to me since you are > running backups of your data that it may be wise to switch from FULL > loggging to SIMPLE logging now that you're doing Indexes. This way you > don't log every event.
Additionally restriciting the size of the log to a > specific size will also help speed things along, this way SQL Server is not > tied up w/ useless things such as creating virtual page files. In general > this is good practice, but in your case it > will be very much more needed. I've posted as has Eric and I beleive > others, on how to do this, if you don't have the email, let me know and I'll > post again. > > as far as trying to use the 4gb, you may want to take a look at this... this > will obviously help speed up performance, because up until now you've only > been using 2gb, and yes Windows 2000 natively supports 4gb, while XP is > restricted to 2gb... why? I dunno. (but that's one more reason for me to > hold on to 2000 a little longer) > > http://www.sql-server-performance.com/awe_memory.asp > > On Tue, 14 Sep 2004 12:45:32 -0400, John W. Colby > wrote: > > Just to let you know some numbers on the database as it currently > > stands. > > > > First, I have two almost identical machines which I built to hold and > > manipulate the database. Both machines use an MSI K8N Neo motherboard > > with built in 4 port SATA and dual channel IDE, gbit LAN. The > > processor is the 754 pin Athlon64 at 3ghz. The server currently has > > Windows 2K and 3g RAM installed. Apparently Win2K can use up to 4g > > ram whereas XP is limited to 2g. Unfortunately I cannot persuade SQL > > Server to use more than 2g RAM so I am not sure that more memory than > > that is really useful in this context. > > > > The server then has (4) 250g Maxtor SATA drives and (1) 250g Maxtor > > IDE drive holding the data files and the log file respectively. The > > second machine is currently running XP Pro. Since the two new > > machines have gbit nics built into the motherboard I bought an 8 port > > gbit switch so they could talk at full speed. In general I have found > > that I can run operations from the second machine to the server over > > gbit LAN at close to full speed, i.e. 
the LAN is not a severe > > bottleneck anymore (it definitely was at 100mbit). > > -- > -Francisco > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco From fhtapia at gmail.com Thu Sep 16 01:10:14 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 15 Sep 2004 23:10:14 -0700 Subject: [dba-SQLServer] triggers question In-Reply-To: References: Message-ID: Check out Books Online; when checking columns you can script it if a column changed, something like:

IF UPDATE (column)
BEGIN
    UPDATE newtable SET column = column WHERE criteria = criteria
END

On Tue, 14 Sep 2004 17:14:50 -0500, Julian Felipe Castrillon Trejos wrote: > hello eveybody i have a question i hope you can help me answer > > i'm working with SQL server 2000 and i'm making an update trigger in a database.
> > the thing is, i have two tables and i want to be sure both of then have the same data, when the principal table is updated that update must be made in the secondary table but i have a problem, in the principal table there are several rows with the same value in the field to be updated so i don't really now witch particular rows was afected in order to make the update in the secondary table, here is an example > > field1 field2 field3 > 1 x 4 > 2 y 4 > 3 z 4 > 4 w 4 > 5 u 4 > 6 c 4 > > both of the tables have the same data, when i change the value 4 from the principal table in the field3 in the first row (1 x 4) i want the update trigger to change the same row in the secundary table but how can i identify that row in the secondary table if the trigger only give me the value 4 and the new value for that field and all of the rows have the same 4 in the field3 so i don't know whitch of the rows was the one updated. i want to know if there is a way to ask the trigger for the number of the row affected or somethin like that or if there is a way extract the value from the field1 witch is unique and it could identify exactly one row. > > thank you for your help > > PD:sorry about the english. 
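Francisco's IF UPDATE sketch can be fleshed out into a full trigger for the situation Julian describes. A hedged sketch, assuming field1 really is the unique key in both tables; the table and column names come from the example above, and the trigger name is made up:

```sql
-- Sketch: propagate field3 changes from principal to secondary.
-- inserted holds the new row values, deleted the old ones; joining on the
-- unchanged unique key field1 handles multi-row UPDATEs correctly.
CREATE TRIGGER trg_principal_update ON principal
FOR UPDATE
AS
IF UPDATE(field3)
BEGIN
    UPDATE s
    SET s.field3 = i.field3
    FROM secondary AS s
    INNER JOIN inserted AS i ON s.field1 = i.field1
END
```

This sidesteps the problem of several rows sharing the value 4 in field3: the join never looks at field3 to find the matching row, only at field1.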
> > --------------------------------------------- > Julian Castrillon T > Ingeniero de sistemas y computacion > Analista, IDA Ltda > Ingenieria y desarrollo aplicados > --------------------------------------------- > Jesus save, but only Buda > made incremental backups > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco "Rediscover the web" http://www.mozilla.org/products/firefox/ ttp://spreadfirefox.com/community/?q=affiliates&id=792&t=86 From andy at minstersystems.co.uk Thu Sep 16 05:05:56 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Thu, 16 Sep 2004 11:05:56 +0100 Subject: [dba-SQLServer] File extensions In-Reply-To: Message-ID: <000f01c49bd4$c2750f00$b274d0d5@minster33c3r25> Thanks Jim. There won't be any code to look at I'm sure. This is commercial software. I just want to establish if I can extract data from it or link to it. Thanks for the ODBC tip. -- Andy Lacey http://www.minstersystems.co.uk > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf > Of Jim Lawrence (AccessD) > Sent: 14 September 2004 13:49 > To: dba-sqlserver at databaseadvisors.com > Subject: RE: [dba-SQLServer] File extensions > > > Hi Andy: > > At a station, you either check the ODBC file list. If one is > directed toward a MS SQL server, you probably have a SQL > server running. Then just scan the Access code to see how the > FE connects. If their app is bound, through an ODBC > connections, the tables and queries icons will appear quite > different otherwise scan the code for the word ADO. 
> > HTH > Jim > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf > Of Andy Lacey > Sent: Monday, September 13, 2004 11:49 PM > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] File extensions > > > Sent this hours ago but hasn't appeared, hence trying again > - so apologies in advance when the original turns up. > > Hello good people on this wet and windy Autumn evening in the UK > > Simple question from a simple soul. I keep a weather eye on > the SQL server list but never having actually developed a SQL > system my question is: if I go to look at a client's existing > system how can I tell if it is SQL (which I think it is)? > Would file suffixes tell me, or can a developer call a > database anything he/she likes? And if the file suffixes are > the answer what are the magic three letters? Going on from > there, if I was offered the opportunity to take the data away > to have a look at it can I just zip up a file or two, and if > so what would I need in order to be able to read the data in > Access when I got back home? > > -- Andy Lacey > http://www.minstersystems.co.uk > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > From andy at minstersystems.co.uk Thu Sep 16 05:05:56 2004 From: andy at minstersystems.co.uk (Andy Lacey) Date: Thu, 16 Sep 2004 11:05:56 +0100 Subject: [dba-SQLServer] File extensions In-Reply-To: <000001c499c8$1924af40$b274d0d5@minster33c3r25> Message-ID: <001001c49bd4$c29ce250$b274d0d5@minster33c3r25> Wow, sent that 36 hours ago. 
Good job I sent the second copy. This one obviously went via Mars. -- Andy Lacey http://www.minstersystems.co.uk > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf > Of Andy Lacey > Sent: 13 September 2004 20:30 > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] File extensions > > > Hello good people on this wet and windy Autumn evening in the UK > > Simple question from a simple soul. I keep a weather eye on > the SQL server list but never having actually developed a SQL > system my question is: if I go to look at a client's existing > system how can I tell if it is SQL (which I think it is)? > Would file suffixes tell me, or can a developer call a > database anything he/she likes? And if the file suffixes are > the answer what are the magic three letters? Going on from > there, if I was offered the opportunity to take the data away > to have a look at it can I just zip up a file or two, and if > so what would I need in order to be able to read the data in > Access when I got back home? > > -- Andy Lacey > http://www.minstersystems.co.uk > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > From MarkBoyd at McBeeAssociates.com Thu Sep 16 10:46:35 2004 From: MarkBoyd at McBeeAssociates.com (Mark Boyd) Date: Thu, 16 Sep 2004 11:46:35 -0400 Subject: [dba-SQLServer] Copying Replicated DB Message-ID: <8AD9192777941F4299B3498C427C0AFF453E@mail01.Mcbassoc.com> I've copied the table structures of a replicated DB from another SQL Server. I have no need to use replication with this new DB. When attempting to add a record to any of the tables, I receive the following error: "Invalid object name 'dbo.sysmergearticles'". How can I tell the new DB that there is no need for the merge tables? 
Or, how can I get rid of them altogether? Any help or direction is greatly appreciated. Thanks, Mark Boyd Sr. Systems Analyst McBee Associates, Inc. markboyd at mcbeeassociates.com From John.Maxwell2 at ntl.com Thu Sep 16 11:34:35 2004 From: John.Maxwell2 at ntl.com (John Maxwell @ London City) Date: Thu, 16 Sep 2004 17:34:35 +0100 Subject: [dba-SQLServer] triggers question Message-ID: Hello Julian, new to sql server so as always happy to be corrected. However: In your trigger you can refer to the deleted table (which contains all the records which have been updated by the event which fired the trigger) and join to your second table on field1 to perform your update. Have to admit I have not used this myself but think it is the way to go. Hope this helps Regards john -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Julian Felipe Castrillon Trejos Sent: 14 September 2004 23:15 To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] triggers question hello eveybody i have a question i hope you can help me answer i'm working with SQL server 2000 and i'm making an update trigger in a database.
the thing is, i have two tables and i want to be sure both of then have the same data, when the principal table is updated that update must be made in the secondary table but i have a problem, in the principal table there are several rows with the same value in the field to be updated so i don't really now witch particular rows was afected in order to make the update in the secondary table, here is an example field1 field2 field3 1 x 4 2 y 4 3 z 4 4 w 4 5 u 4 6 c 4 both of the tables have the same data, when i change the value 4 from the principal table in the field3 in the first row (1 x 4) i want the update trigger to change the same row in the secundary table but how can i identify that row in the secondary table if the trigger only give me the value 4 and the new value for that field and all of the rows have the same 4 in the field3 so i don't know whitch of the rows was the one updated. i want to know if there is a way to ask the trigger for the number of the row affected or somethin like that or if there is a way extract the value from the field1 witch is unique and it could identify exactly one row. thank you for your help PD:sorry about the english. --------------------------------------------- Julian Castrillon T Ingeniero de sistemas y computacion Analista, IDA Ltda Ingenieria y desarrollo aplicados --------------------------------------------- Jesus save, but only Buda made incremental backups _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com The contents of this email and any attachments are sent for the personal attention of the addressee(s) only and may be confidential. If you are not the intended addressee, any use, disclosure or copying of this email and any attachments is unauthorised - please notify the sender by return and delete the message. 
Any representations or commitments expressed in this email are subject to contract. ntl Group Limited From JFK at puj.edu.co Thu Sep 16 18:57:05 2004 From: JFK at puj.edu.co (Julian Felipe Castrillon Trejos) Date: Thu, 16 Sep 2004 18:57:05 -0500 Subject: [dba-SQLServer] triggers question Message-ID: thanks for your answer, but i have another question: what happens if i update only field3, are the values from fields 1 and 2 stored in the deleted table too?? does the deleted table store only the values from the updated fields, or the entire row?? if not, how can i get the field1 value?? i hope you can help me, thank you for your quick answer. PS: sorry about the english --------------------------------------------- Julian Castrillon T Ingeniero de sistemas y computacion Analista, IDA Ltda Ingenieria y desarrollo aplicados --------------------------------------------- Jesus save, but only Buda made incremental backups ________________________________ From: dba-sqlserver-bounces at databaseadvisors.com on behalf of John Maxwell @ London City Sent: Wed 15/09/2004 10:58 To: 'dba-sqlserver at databaseadvisors.com' Subject: RE: [dba-SQLServer] triggers question Hello Julian, new to sql server so as always happy to be corrected. However: In your trigger you can refer to the deleted table (which contains all the records which have been updated by the event which fired the trigger) and join to your second table on field1 to perform your update. Have to admit I have not used this myself but think it is the way to go.
Hope this helps Regards john -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Julian Felipe Castrillon Trejos Sent: 14 September 2004 23:15 To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] triggers question hello eveybody i have a question i hope you can help me answer i'm working with SQL server 2000 and i'm making an update trigger in a database. the thing is, i have two tables and i want to be sure both of then have the same data, when the principal table is updated that update must be made in the secondary table but i have a problem, in the principal table there are several rows with the same value in the field to be updated so i don't really now witch particular rows was afected in order to make the update in the secondary table, here is an example field1 field2 field3 1 x 4 2 y 4 3 z 4 4 w 4 5 u 4 6 c 4 both of the tables have the same data, when i change the value 4 from the principal table in the field3 in the first row (1 x 4) i want the update trigger to change the same row in the secundary table but how can i identify that row in the secondary table if the trigger only give me the value 4 and the new value for that field and all of the rows have the same 4 in the field3 so i don't know whitch of the rows was the one updated. i want to know if there is a way to ask the trigger for the number of the row affected or somethin like that or if there is a way extract the value from the field1 witch is unique and it could identify exactly one row. thank you for your help PD:sorry about the english. 
--------------------------------------------- Julian Castrillon T Ingeniero de sistemas y computacion Analista, IDA Ltda Ingenieria y desarrollo aplicados --------------------------------------------- Jesus save, but only Buda made incremental backups _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From John.Maxwell2 at ntl.com Thu Sep 16 19:23:03 2004 From: John.Maxwell2 at ntl.com (John Maxwell @ London City) Date: Fri, 17 Sep 2004 01:23:03 +0100 Subject: [dba-SQLServer] triggers question Message-ID: The deleted table contains the whole row. As mentioned I have not used this table myself, however a quick look at Books Online under "Using the inserted and deleted Tables" confirms this. As always happy to be corrected as new to SQL server Regards john -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Julian Felipe Castrillon Trejos Sent: 17 September 2004 00:57 To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] triggers question thank?s for your answer, but i have another question, what happen if i updated only the field3 the value from the fields 1 and 2 is stored in the deleted table too??
the deleted table only store the values from the updated fields or the entire row?? if it?s not how can i get the field1 value?? i hope you can help me, thank you for your quick answer. PD:sorry about the english --------------------------------------------- Julian Castrillon T Ingeniero de sistemas y computacion Analista, IDA Ltda Ingenieria y desarrollo aplicados --------------------------------------------- Jesus save, but only Buda made incremental backups ________________________________ De: dba-sqlserver-bounces at databaseadvisors.com en nombre de John Maxwell @ London City Enviado el: mi? 15/09/2004 10:58 Para: 'dba-sqlserver at databaseadvisors.com' Asunto: RE: [dba-SQLServer] triggers question Hello Julian, new to sql server so as always happy to be corrected. However: In your trigger you can refer to the deleted table (which contains all the records which have been updated by the event which fired the trigger) and join to your second table on field1 to perform your update. Have to admit I have not used this myself but think it is the way to go. Hope this helps Regards john -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Julian Felipe Castrillon Trejos Sent: 14 September 2004 23:15 To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] triggers question hello eveybody i have a question i hope you can help me answer i'm working with SQL server 2000 and i'm making an update trigger in a database. 
The thing is, I have two tables and I want to be sure both of them have the same data: when the principal table is updated, that update must be made in the secondary table. But I have a problem. In the principal table there are several rows with the same value in the field to be updated, so I don't really know which particular row was affected in order to make the update in the secondary table. Here is an example:

field1  field2  field3
1       x       4
2       y       4
3       z       4
4       w       4
5       u       4
6       c       4

Both of the tables have the same data. When I change the value 4 in field3 of the first row (1 x 4) in the principal table, I want the update trigger to change the same row in the secondary table. But how can I identify that row in the secondary table if the trigger only gives me the value 4 and the new value for that field, and all of the rows have the same 4 in field3? I don't know which of the rows was the one updated. I want to know if there is a way to ask the trigger for the number of the row affected, or something like that, or if there is a way to extract the value from field1, which is unique and could identify exactly one row. Thank you for your help.
PS: sorry about the English.
---------------------------------------------
Julian Castrillon T
Systems and Computing Engineer
Analyst, IDA Ltda, Applied Engineering and Development
---------------------------------------------
Jesus saves, but only Buddha made incremental backups
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
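A sketch of the approach discussed in this thread: join the inserted pseudo-table (which, like deleted, holds the entire before/after rows, not just the changed columns) back to the secondary table on the unique field1. Table names Principal and Secondary are hypothetical stand-ins for the two tables in the example; this assumes field1 itself is never updated, and it is untested.

```sql
-- Sketch only: multi-row safe, because inserted can hold many rows.
-- field1 is assumed unique and unchanged by the UPDATE.
CREATE TRIGGER trg_Principal_Update
ON Principal
FOR UPDATE
AS
UPDATE s
SET s.field2 = i.field2,
    s.field3 = i.field3
FROM Secondary AS s
INNER JOIN inserted AS i
    ON s.field1 = i.field1
```

If field1 could also be updated, you would instead join inserted to deleted to pair old and new rows, but for the example above the simple join is enough.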
From fhtapia at gmail.com Fri Sep 17 14:23:02 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Fri, 17 Sep 2004 12:23:02 -0700
Subject: [dba-SQLServer] SQL Server is hopelessly slow
In-Reply-To: <000201c4980b$8222afb0$e8dafea9@ColbyM6805>
References: <000201c4980b$8222afb0$e8dafea9@ColbyM6805>
Message-ID:
EM won't tell you what it's doing every step of the way, but if you load Profiler you can have SQL Server give you some feedback on what it is doing... You will also want to get more intimately involved with QA (Query Analyzer) instead of EM, so you can do more maintenance things smarter. Did you ever get that /3GB switch for SQL Server going?
On Sat, 11 Sep 2004 10:27:41 -0400, John W. Colby wrote:
> I am not running Server 2003 YET. I have it, but I have to get the drivers
> for my motherboard loaded. The CD that comes with the MB does a system
> check at install and prevents loading unless the OS is in a list of
> supported OSs, and Server 2003 is not in that list. Tech support for the MB
> company says the drivers should work, so... I ended up loading XP Pro just
> to get up and running. I wonder if I could do an OS upgrade to Server 2003
> over the top of XP Pro. Since the drivers are loaded, perhaps I could get
> it installed that way.
> > I can certainly appreciate "a lot going on" but for example I tried to add > an identifier field (auto increment long) to the table. AFAICT There just > isn't any way to do that before the load so I have to do it when I am done. > I started it running and THREE DAYS LATER my machine is still locked up. > With no feedback from EM I have no idea if it will be finished in an hour or > it is only on the 3 millionth row with 160 million rows to go? A few hours > left or 3 years? This is no way to run a company! > > I re-imported a single set of 3 million records and am about to try setting > up the identifier field on that subset and time how long it takes. However > my first machine is still locked up trying to roll back the previous attempt > on the entire database. Now I start this on my remaining fast machine. > What if it locks that up for days on end? This is simply silly. There must > be a way for SQL Server to write a status to a log file or SOMETHING. I > just can't believe that this superpowerful whizbang database engine won't > tell me whether it is doing something or simply on lunch break. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Mark Rider > Sent: Saturday, September 11, 2004 9:54 AM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] SQL Server is hopelessly slow > > Have you downloaded / installed / used Books Online (BOL)? There is a LOT > of information on what to do and how to do it in there, and it has taught me > a LOT about what I have done wrong. It is a free download from MS at > > http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp > > You have a 64 bit processor, but are you running Windows Server 2003? > That has the ability to take advantage of the processor architecture better > than any other OS. 
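On the lack-of-feedback complaint above: SQL Server 2000 gives no per-row progress for an ALTER TABLE, but a few standard commands run from Query Analyzer on a second connection can at least show whether the server is working, rolling back, or blocked. This is a general sketch, not advice from the thread; the database name is a placeholder.

```sql
-- Who is doing what, and who is blocking whom (see the BlkBy column):
EXEC sp_who2

-- Is there a long-running open transaction (e.g. a huge rollback)?
DBCC OPENTRAN ('mydatabase')

-- How full is each transaction log? A steadily growing log means work
-- is being done; a bulk operation that fills the log is a common cause
-- of multi-day "hangs":
DBCC SQLPERF (LOGSPACE)
```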
> > The only other thing I can offer is that you give it some time and patience. > I deal with about 25 million rows of data every day, and when DTS gets > rolling along it seems to just sit there after the initial CSV import to the > table. There is a lot going on behind the scenes, and I have learned (the > hard way) that stopping the import and trying to start over will take more > time than walking away for a while and coming back to the machine later. > > On Sat, 11 Sep 2004 07:25:17 -0400, John W. Colby > wrote: > > I have this mongo database. Regardless of what I do it seems, SQL > > Server takes FOREVER to do anything. To the point where it appears > > that it is locked up. Is there ANY way to get EM to display a status > > of what it is doing so that I can even know that is doing something at > > all? Is there any way to break EM when it appears to be hung. This > > thing is just unworkable as it is. I will try to do something > > (anything it seems) and EM locks the entire damn machine up. No > > incrementing status, nothing except the busy cursor. > > > > I have built a smaller set of "just" 3 million records. It does the > > same thing. I need to get work done on this thing, set up indexes on > > fields, do data cleanup. It simply isn't going to work if everything > > I do locks the machine up for a week. > > > > I have to believe I have some setting(s) wrong for SQL Server itself. > > My hardware is an Athlon64 3.0ghz with 2.5g RAM. The machine itself > > is very close to as fast as you are going to get on a desktop machine. > > Everyone says "3 million records is nothing to SQL Server" but you > > couldn't prove it by me. Can anyone help me troubleshoot this thing > > and figure out what I am doing wrong? > > > > John W. 
Colby > > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco "Rediscover the web" http://www.mozilla.org/products/firefox/ http://spreadfirefox.com/community/?q=affiliates&id=792&t=86 From jwcolby at colbyconsulting.com Sun Sep 19 00:58:28 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 19 Sep 2004 01:58:28 -0400 Subject: [dba-SQLServer] test In-Reply-To: <001901c49d23$46a919c0$e8dafea9@ColbyM6805> Message-ID: <002b01c49e0d$b27d3510$e8dafea9@ColbyM6805> Nothing today from any of our lists. Just a quiet day? John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Sun Sep 19 12:13:36 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 19 Sep 2004 13:13:36 -0400 Subject: [dba-SQLServer] test In-Reply-To: <002b01c49e0d$b27d3510$e8dafea9@ColbyM6805> Message-ID: <000901c49e6c$044d7c20$e8dafea9@ColbyM6805> Does anyone know the cost of LiteSpeed backup widget? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 1:58 AM To: 'Access Developers discussion and problem solving'; dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] test Nothing today from any of our lists. Just a quiet day? John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From accessd at shaw.ca Sun Sep 19 12:53:03 2004 From: accessd at shaw.ca (Jim Lawrence (AccessD)) Date: Sun, 19 Sep 2004 10:53:03 -0700 Subject: [dba-SQLServer] test In-Reply-To: <000901c49e6c$044d7c20$e8dafea9@ColbyM6805> Message-ID: Hi John: Here is their web site: http://www.imceda.com/litespeed.asp?page=1. There is no price on their web site but I believe the cost is about $600.00. (Very pricey IMHO) You can download an evaluation copy but I am not sure whether it has been crippled or not. HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 10:14 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] test Does anyone know the cost of LiteSpeed backup widget? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 1:58 AM To: 'Access Developers discussion and problem solving'; dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] test Nothing today from any of our lists. Just a quiet day? John W. 
Colby
www.ColbyConsulting.com
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
From jwcolby at colbyconsulting.com Sun Sep 19 13:10:32 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Sun, 19 Sep 2004 14:10:32 -0400
Subject: [dba-SQLServer] LiteSpeed cost
In-Reply-To: Message-ID: <000a01c49e73$f878caf0$e8dafea9@ColbyM6805>
I did download the evaluation and am testing it now. No restrictions are mentioned anywhere, either on their site or in the software as it runs. I am interested for the simple reason that the nVLDB I am dealing with is already way too big to back up in the normal manner, and the backup takes way too long as well. The web site says that huge space savings are available because they use compression, and that the software also performs backups much faster, 2 to 3 times faster (or more).
My database is already 260 GB and will grow as I add indexes to the table. I definitely need this kind of speed / size help if I am going to get the database backup happening automatically. I am currently manually zipping the files, but that is not a good long-term solution.
John W. Colby
www.ColbyConsulting.com
-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD)
Sent: Sunday, September 19, 2004 1:53 PM
To: dba-sqlserver at databaseadvisors.com
Subject: RE: [dba-SQLServer] test
Hi John:
Here is their web site: http://www.imceda.com/litespeed.asp?page=1. There is no price on their web site, but I believe the cost is about $600.00.
(Very pricey IMHO) You can download an evaluation copy, but I am not sure whether it has been crippled or not.
HTH
Jim
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
From jwcolby at colbyconsulting.com Sun Sep 19 14:38:59 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Sun, 19 Sep 2004 15:38:59 -0400
Subject: [dba-SQLServer] LiteSpeed cost
In-Reply-To: <000a01c49e73$f878caf0$e8dafea9@ColbyM6805>
Message-ID: <000b01c49e80$534eeb60$e8dafea9@ColbyM6805>
I did not time the backup, but the file size for the backup is 31 GB for a backup of 260 GB, which is a pretty hefty savings.
John W.
Colby
www.ColbyConsulting.com
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
From fhtapia at gmail.com Mon Sep 20 10:38:46 2004
From: fhtapia at gmail.com (Francisco Tapia)
Date: Mon, 20 Sep 2004 08:38:46 -0700
Subject: [dba-SQLServer] LiteSpeed cost
In-Reply-To: <000b01c49e80$534eeb60$e8dafea9@ColbyM6805>
References: <000a01c49e73$f878caf0$e8dafea9@ColbyM6805> <000b01c49e80$534eeb60$e8dafea9@ColbyM6805>
Message-ID:
THAT is IMPRESSIVE!...
On Sun, 19 Sep 2004 15:38:59 -0400, John W. Colby wrote:
> I did not time the backup but the file size for the backup is 31 gb for a
> backup of 260 gb which is a pretty hefty savings.
>
> John W.
Colby
> www.ColbyConsulting.com
--
-Francisco
"Rediscover the web"
http://spreadfirefox.com/community/?q=affiliates&id=792&t=86
http://ft316db.VOTEorNOT.org
From jwcolby at colbyconsulting.com Tue Sep 21 13:35:52 2004
From: jwcolby at colbyconsulting.com (John
W. Colby)
Date: Tue, 21 Sep 2004 14:35:52 -0400
Subject: [dba-SQLServer] Count of records with zip
In-Reply-To: <000b01c49e80$534eeb60$e8dafea9@ColbyM6805>
Message-ID: <000d01c4a009$d3d7c6a0$e8dafea9@ColbyM6805>
I need what appears to me to be a crosstab: a COUNT of all records in a set of zip codes. In other words, I have a table of zips (about 100), and I need a count of how many records in my nVLDB are in each zip code. Can anyone point me in the right direction in SQL Server?
John W. Colby
www.ColbyConsulting.com
From stuart at lexacorp.com.pg Tue Sep 21 17:11:53 2004
From: stuart at lexacorp.com.pg (Stuart McLachlan)
Date: Wed, 22 Sep 2004 08:11:53 +1000
Subject: [dba-SQLServer] Count of records with zip
In-Reply-To: <000d01c4a009$d3d7c6a0$e8dafea9@ColbyM6805>
References: <000b01c49e80$534eeb60$e8dafea9@ColbyM6805>
Message-ID: <415133C9.1410.3C7DB4D@lexacorp.com.pg>
On 21 Sep 2004 at 14:35, John W. Colby wrote:
> I need what appears to me to be a crosstab, a COUNT of all records in a set
> of zip codes. In other words, I have a table of zips (about 100). I need a
> count of how many records in my nVLDB is in each zip code. Can anyone point
> me in the right direction in SQL Server?
>
Unless I'm misunderstanding you, that's not a crosstab. Off the top of my head, this would do it:

Select myTable.Zip, Count(*)
from myTable
Inner Join myZipTable on myTable.Zip = myZipTable.Zip
Group by myTable.Zip
Order by myTable.Zip

--
Stuart
From jwcolby at colbyconsulting.com Tue Sep 21 19:33:45 2004
From: jwcolby at colbyconsulting.com (John W. Colby)
Date: Tue, 21 Sep 2004 20:33:45 -0400
Subject: [dba-SQLServer] Iif in SQL Server
In-Reply-To: <000b01c49e80$534eeb60$e8dafea9@ColbyM6805>
Message-ID: <001601c4a03b$d4e8aeb0$e8dafea9@ColbyM6805>
I have a situation where I need an iif construct that returns the contents of one field if it exists, else returns the other field. How do you do something like that in SQL Server? Do I use a stored procedure?
I need this to run on potentially large datasets in something approaching real time.
John W.
Colby
www.ColbyConsulting.com
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
From kens.programming at verizon.net Tue Sep 21 19:40:18 2004
From: kens.programming at verizon.net (Ken Stoker)
Date: Tue, 21 Sep 2004 17:40:18 -0700
Subject: [dba-SQLServer] Iif in SQL Server
In-Reply-To: <001601c4a03b$d4e8aeb0$e8dafea9@ColbyM6805>
Message-ID: <20040922004007.IRAT22385.out006.verizon.net@enterprise>
Use CASE statements. If you look up CASE in BOL, it will tell you how to work with them.
Ken
-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby
Sent: Tuesday, September 21, 2004 5:34 PM
To: dba-sqlserver at databaseadvisors.com
Subject: [dba-SQLServer] Iif in SQL Server
I have a situation where I need to have an iif construct that returns the contents of one field if it exists, else returns the other field. How do you do something like that in SQL Server? Do I use a stored procedure?
I need this to run on potentially large datasets in something approaching real time. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 3:39 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] LiteSpeed cost I did not time the backup but the file size for the backup is 31 gb for a backup of 260 gb which is a pretty hefty savings. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 2:11 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] LiteSpeed cost I did download the evaluation ad am testing it now. No restrictions are mentioned anywhere either on their site nor in the software as it runs. I am interested for the simple reason that the nVLDB I am dealing with is already way to big to backup in the normal manner and takes way to long as well. The web site says that huge space savings are available because they use compression and that the software also performs backups much faster, 2 to 3 times faster (or more). My database is already 260 gbytes and will grow as I add indexes to the table. I definitely need this kind of speed / size help if I am going to get the database backup happening automatically. I am currently manually zipping the files but that is not a good long term solution. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence (AccessD) Sent: Sunday, September 19, 2004 1:53 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] test Hi John: Here is their web site: http://www.imceda.com/litespeed.asp?page=1. There is no price on their web site but I believe the cost is about $600.00. (Very pricey IMHO) You can download an evaluation copy but I am not sure whether it has been crippled or not. HTH Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 10:14 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] test Does anyone know the cost of LiteSpeed backup widget? John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Sunday, September 19, 2004 1:58 AM To: 'Access Developers discussion and problem solving'; dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] test Nothing today from any of our lists. Just a quiet day? John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Tue Sep 21 19:58:35 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Wed, 22 Sep 2004 10:58:35 +1000 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <001601c4a03b$d4e8aeb0$e8dafea9@ColbyM6805> References: <000b01c49e80$534eeb60$e8dafea9@ColbyM6805> Message-ID: <41515ADB.11233.46078B2@lexacorp.com.pg> On 21 Sep 2004 at 20:33, John W. Colby wrote: > I have a situation where I need to have an iif construct that returns the > contents of one field if it exists, else return the other field. How do you > do something like that in SQL Server? Do I use a stored procedure? I need > this to run on potentially large datasets in something approaching real > time. 
> Take a look at CASE in BOL, it has several examples. And - how about doing a bit of trimming. Your last post contains the contents of your previous 4 unrelated requests plus one response to one of them, followed by 5 sets of DBA signature blocks (Of course, you don't get that sort of problem when people bottom post, they just naturally trim down to the relevant bits. ) -- Stuart From jwcolby at colbyconsulting.com Tue Sep 21 20:38:48 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Tue, 21 Sep 2004 21:38:48 -0400 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <41515ADB.11233.46078B2@lexacorp.com.pg> Message-ID: <001701c4a044$eb0dffc0$e8dafea9@ColbyM6805> >And - how about doing a bit of trimming. Your last post contains the contents of your previous 4 unrelated requests plus one response to one of them, followed by 5 sets of DBA signature blocks Sorry, my Outlook is totally screwed up with two sets of address books, one of which works, the other doesn't. Haven't had time to look at why, but as a result in order to post I just grab an email and reply. I try to trim up to my sig but sometimes I forget. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Stuart McLachlan Sent: Tuesday, September 21, 2004 8:59 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Iif in SQL Server On 21 Sep 2004 at 20:33, John W. Colby wrote: > I have a situation where I need to have an iif construct that returns > the contents of one field if it exists, else return the other field. > How do you do something like that in SQL Server? Do I use a stored > procedure? I need this to run on potentially large datasets in > something approaching real time. > Take a look at CASE in BOL, it has several examples. And - how about doing a bit of trimming. 
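Stuart's BOL pointer is the right one: a CASE expression (or COALESCE, which is shorthand for exactly this "first non-empty field" pattern) does in T-SQL what Iif does in Access. A minimal sketch of both forms, run through SQLite via Python purely so it is executable anywhere; the table and column names (tblAddress, POZip, OrigZip) are invented for illustration, and the CASE and COALESCE expressions themselves work unchanged in SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblAddress (PK INTEGER PRIMARY KEY, POZip TEXT, OrigZip TEXT)")
con.executemany("INSERT INTO tblAddress VALUES (?, ?, ?)",
                [(1, "30301", "10001"),   # post-office zip present: use it
                 (2, None,    "10002"),   # no PO zip: fall back to the original
                 (3, "60601", None)])

# CASE spelled out, the way BOL shows it.
rows = con.execute("""
    SELECT PK,
           CASE WHEN POZip IS NOT NULL THEN POZip ELSE OrigZip END AS Zip
    FROM tblAddress ORDER BY PK
""").fetchall()
print(rows)  # [(1, '30301'), (2, '10002'), (3, '60601')]

# COALESCE: the idiomatic shorthand for the same "first non-null" logic.
rows2 = con.execute(
    "SELECT PK, COALESCE(POZip, OrigZip) AS Zip FROM tblAddress ORDER BY PK"
).fetchall()
assert rows2 == rows
```

Both expressions evaluate per row inside a plain SELECT, so no stored procedure is required just to get the Iif behavior.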
Your last post contains the contents of your previous 4 unrelated requests plus one response to one of them, followed by 5 sets of DBA signature blocks (Of course, you don't get that sort of problem when people bottom post, they just naturally trim down to the relevant bits. ) -- Stuart From stuart at lexacorp.com.pg Tue Sep 21 20:45:36 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Wed, 22 Sep 2004 11:45:36 +1000 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <001701c4a044$eb0dffc0$e8dafea9@ColbyM6805> References: <41515ADB.11233.46078B2@lexacorp.com.pg> Message-ID: <415165E0.16725.48B8488@lexacorp.com.pg> On 21 Sep 2004 at 21:38, John W. Colby wrote: > Sorry, my Outlook is totally screwed up That's tautology. -- Stuart From fhtapia at gmail.com Wed Sep 22 00:30:28 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Tue, 21 Sep 2004 22:30:28 -0700 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <415165E0.16725.48B8488@lexacorp.com.pg> References: <41515ADB.11233.46078B2@lexacorp.com.pg> <001701c4a044$eb0dffc0$e8dafea9@ColbyM6805> <415165E0.16725.48B8488@lexacorp.com.pg> Message-ID: JOHN! I know you're busy these days but it works like this SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue WHEN Fieldothercase = 'othervalue' THEN FieldWhenOtherValue ELSE FieldwhenELSE END AS AliasName, Next Field FROM TableName On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan wrote: > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > Sorry, my Outlook is totally screwed up > > That's tautology. > > > > > -- > Stuart > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Wed Sep 22 07:39:44 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Wed, 22 Sep 2004 08:39:44 -0400 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: Message-ID: <002601c4a0a1$3fb6cd20$e8dafea9@ColbyM6805> Are we talking about code in an Sproc here? I am not talking about AccessVB. I am looking for something to work with 165 million records and return results in a few minutes. In the end I built a table with a PK field, a ZIPCode field and a Src field. I made the PK field a PK so that it has a unique index. I then wrote two queries, one that pulls data from one field plus the PK plus a 1 as Src and appends it to my new table. The other query pulls data from the other field plus the PK plus a 2 as the Src and appends to the table. Run the first query. All records that have anything in the ZIP code are put in the table. Run the second query. All the records that have something in the second ZIP field but aren't already in the table get put in the table. Now I have a table with ONE field, with data from one or the other field, with a column which tells me which source field it came from, with a PK to join back up to the main table. Of course my second query is failing to append at all because the PK already exists in the new table. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, September 22, 2004 1:30 AM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Iif in SQL Server JOHN! I know you're busy these days but it works like this SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue WHEN Fieldothercase = 'othervalue THEN FieldWhenOtherValue ELSE FieldwhenELSE END AS AliasName, Next Field >From TableName On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan wrote: > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > Sorry, my Outlook is totally screwed up > > That's tautology. 
> > > > > -- > Stuart > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Wed Sep 22 07:47:42 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 22 Sep 2004 08:47:42 -0400 Subject: [dba-SQLServer] Violation of primary key constraint In-Reply-To: Message-ID: <002701c4a0a2$5e378720$e8dafea9@ColbyM6805> I built a table that I want to dump records into. The source data is the PK of my big table plus one of two different zip code fields. Two queries, all the records - PKs plus ZIP - from any records with a Zip in field A. The second, all the records - PK plus Zip - with any Zip in field B. Append query A, append query B. In Access, Query B would append all the records where there was not a collision with the PKs already in the table. In SQL Server the entire second query just fails because of a collision, giving me a "Violation of primary key constraint, statement terminated". As Bill Cosby says in one of his wonderful acts, "brain damaged children". 'Scuse me, I WANT the primary key constraint to prevent records from going in but I also want those records without a violation to go in. So how do I override this brain damaged child and tell it to accept those records that do not violate the PK constraint? John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Wed Sep 22 07:56:24 2004 From: jwcolby at colbyconsulting.com (John W. 
Colby) Date: Wed, 22 Sep 2004 08:56:24 -0400 Subject: [dba-SQLServer] Violation of primary key constraint In-Reply-To: <002701c4a0a2$5e378720$e8dafea9@ColbyM6805> Message-ID: <002801c4a0a3$942df610$e8dafea9@ColbyM6805> Well I just discovered the <> operator in the Join in SQL Server. I think this will solve my problem since I just back up and pull only the records with a ZIP in field B where the PK <> PK in the destination table. NO idea what implication this has for execution time though. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 22, 2004 8:48 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Violation of primary key constraint I built a table that I want to dump records in to. The source data is the PK of my big table plus one of two different zip code fields. Two queries, all the records - PKs plus ZIP - from any records with a Zip in field A. The second all the records - PK plus Zip - with any Zip in field B. Append query A, append query B. In access, Query B would append all the records where there was not a collision with the PKs already in the table. In SQL Server the entire second query just fails because of a collision, giving me a "Violation of primary key constraint, statement terminated". As Bill Cosby says in one of his wonderful acts, "brain damaged children". 'Scuse me, I WANT the primary key constraint to prevent records from going in but I also want those records without a violation to go in. So how do I override this brain damaged child and tell it to accept those records that do not violate the PK constraint? John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From mikedorism at adelphia.net Wed Sep 22 08:52:22 2004 From: mikedorism at adelphia.net (Mike & Doris Manning) Date: Wed, 22 Sep 2004 09:52:22 -0400 Subject: [dba-SQLServer] Violation of primary key constraint In-Reply-To: <002801c4a0a3$942df610$e8dafea9@ColbyM6805> Message-ID: <000001c4a0ab$62cc3b60$060aa845@hargrove.internal> Take a look at the NOT IN topic in BOL. Doris Manning Database Administrator Hargrove Inc. www.hargroveinc.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 22, 2004 8:56 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Violation of primary key constraint Well I just discovered the <> operator in the Join in SQL Server. I think this will solve my problem since I just back up and pull only the records with a ZIP in field B where the PK <> PK in the destination table. NO idea what implication this has for execution time though. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 22, 2004 8:48 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Violation of primary key constraint I built a table that I want to dump records in to. The source data is the PK of my big table plus one of two different zip code fields. Two queries, all the records - PKs plus ZIP - from any records with a Zip in field A. The second all the records - PK plus Zip - with any Zip in field B. Append query A, append query B. 
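Doris's NOT IN pointer is the standard fix here: filter the colliding PKs out in the SELECT, so the INSERT never sees a row that would violate the constraint. A runnable sketch of the two-pass append (SQLite via Python purely for portability; the table names tblBig and tblZip and the ZipA/ZipB columns are invented stand-ins for the thread's tables, and the same INSERT ... SELECT ... WHERE ... NOT IN shape works in T-SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblZip (PK INTEGER PRIMARY KEY, Zip TEXT, Src INTEGER)")
con.execute("CREATE TABLE tblBig (PK INTEGER PRIMARY KEY, ZipA TEXT, ZipB TEXT)")
con.executemany("INSERT INTO tblBig VALUES (?, ?, ?)",
                [(1, "30301", "11111"), (2, None, "22222"), (3, "60601", None)])

# Pass 1: every record with a zip in field A, Src = 1.
con.execute("INSERT INTO tblZip SELECT PK, ZipA, 1 FROM tblBig WHERE ZipA IS NOT NULL")

# Pass 2: records with a zip in field B whose PK is NOT already in tblZip.
# The colliding rows are excluded before the INSERT runs, so nothing aborts.
con.execute("""
    INSERT INTO tblZip
    SELECT PK, ZipB, 2 FROM tblBig
    WHERE ZipB IS NOT NULL
      AND PK NOT IN (SELECT PK FROM tblZip)
""")

rows = con.execute("SELECT PK, Zip, Src FROM tblZip ORDER BY PK").fetchall()
print(rows)  # [(1, '30301', 1), (2, '22222', 2), (3, '60601', 1)]
```

The equivalent LEFT JOIN ... WHERE dest.PK IS NULL anti-join gives the same result and often optimizes better than NOT IN on very large tables.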
In access, Query B would append all the records where there was not a collision with the PKs already in the table. In SQL Server the entire second query just fails because of a collision, giving me a "Violation of primary key constraint, statement terminated". As Bill Cosby says in one of his wonderful acts, "brain damaged children". 'Scuse me, I WANT the primary key constraint to prevent records from going in but I also want those records without a violation to go in. So how do I override this brain damaged child and tell it to accept those records that do not violate the PK constraint? John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Wed Sep 22 10:40:18 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 22 Sep 2004 11:40:18 -0400 Subject: [dba-SQLServer] Help! Violation of primary key constraint In-Reply-To: <002801c4a0a3$942df610$e8dafea9@ColbyM6805> Message-ID: <000001c4a0ba$77bb47a0$e8dafea9@ColbyM6805> In the nVLDB database I am working on, there are two sets of addresses, one that the original person supplied, another that comes from the post office when that person moves. I need to build a single table where I pull the zip (and later the rest of the fields) from the post office supplied field if there is any data in there, then where no PO supplied data, pull the data from the original address fields. I have a PK field in my big table, int, identity. In my new address table, I created a matching PK field which I set to be the PK, not identity but with a unique index. 
I then add a zip field and a byte Src field to hold a number that says which field the source data came from in the big table - 1=PO updated address data, 2=original data. I built a view to pull the PK and the PO supplied zip from the big table and a 1 for the SRC field, then built a query to append that to the new table I am building. That worked and I now have about 10 million records in this new table with data supplied by the Post Office saying the person moved and this is their new address. I built another query to pull the PK and the zip from the original address field. I THOUGHT I could just append that data into the new table and where there was a collision in the PK (the data was already in the new table from the Post Office) the record would drop on the floor, all others would go in the new table. This doesn't work for some reason. As soon as I try to append a record with a PK collision the whole thing aborts with a "violation of PK constraint" error message. FIRST QUESTION: Why does that happen and is there a way to cause it to continue processing so that the records where the PKs collide just drop out? Given the failure of that method, I thought I'd build a "not in" query like I would do in Access, where I would pull all the records where the PK in the Big table is not found in the new table. SQL Server didn't like my syntax on that one. So... I built a query where I joined the big table on the new table on PK with the <> operator in the join, figuring I would get a set of records from the big table where none of the PKs were in the new table. An hour later that query is still running.... It seems like such a simple thing. I'm sure you hear this a lot, but in Access I would have had it done hours ago. Well... I'd have accomplished the task hours ago. With 165 million records Access itself would NEVER have finished the task (which is why I'm here). So, how do I accomplish what I am after here. John W. 
Colby www.ColbyConsulting.com From fhtapia at gmail.com Wed Sep 22 13:05:53 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 22 Sep 2004 11:05:53 -0700 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <002601c4a0a1$3fb6cd20$e8dafea9@ColbyM6805> References: <002601c4a0a1$3fb6cd20$e8dafea9@ColbyM6805> Message-ID: You want a report right? ie a resultset... yes a sproc John.. you don't have to CREATE a sproc, but doing so will store the optimizations for the sproc in the server and possibly even caching parts of the report (since the data hasn't changed). to create a sproc CREATE PROCEDURE stp_MyNewSprocNamingConvention AS SELECT FIELD1, Field2, Field3, Case... FROM Table1 WHERE ClauseoptionsHere On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby wrote: > Are we talking about code in an Sproc here? I am not talking about > AccessVB. I am looking for something to work with 165 million records and > return results in a few minutes. > > In the end I built a table with a PK field, a ZIPCode field and a Src field. > I made the PK field a PK so that it has a unique index. I then wrote two > queries, one that pulls data from one field plus the PK plus a 1 as Src and > appends it to my new table. The other query pulls data from the other field > plus the PK plus a 2 as the Src and appends to the table. Run the first > query. All records that have anything in the ZIP code are put in the table. > Run the second query. All the records that have something in the second ZIP > field but aren't already in the table get put in the table. > > Now I have a table with ONE field, with data from one or the other field, > with a column which tells me which source field it came from, with a PK to > join back up to the main table. > > Of course my second query is failing to append at all because the PK already > exists in the new table. > > John W. 
Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Wednesday, September 22, 2004 1:30 AM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > JOHN! > > I know you're busy these days but it works like this > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > WHEN Fieldothercase = 'othervalue THEN > FieldWhenOtherValue > ELSE FieldwhenELSE > END AS AliasName, > Next Field > > >From TableName > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > wrote: > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > Sorry, my Outlook is totally screwed up > > > > That's tautology. -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Wed Sep 22 13:48:10 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 22 Sep 2004 14:48:10 -0400 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: Message-ID: <000601c4a0d4$b95da710$e8dafea9@ColbyM6805> No, I don't want a report, at least not atm. I need the ability to count these things over and over and over. Thus it has to be FAST. Clients call and ask "how many addresses in these 230 zip codes? That kind of stuff. The zips can come from two different places, thus I want to pull them out and place them in a single table where I can tell where they came from if I need to but I only have to do a count on a single indexed column. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, September 22, 2004 2:06 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Iif in SQL Server You want a report right? ie a resultset... yes a sproc John.. 
you don't have to CREATE a sproc, but doing so will store the optimizations for the sproc in the server and possibly even caching parts of the report (since the data hasn't changed). to create a sproc CREATE PROCEDURE stp_MyNewSprocNamingConvention AS SELECT FIELD1, Field2, Field3, Case... FROM Table1 WHERE ClauseoptionsHere On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby wrote: > Are we talking about code in an Sproc here? I am not talking about > AccessVB. I am looking for something to work with 165 million records > and return results in a few minutes. > > In the end I built a table with a PK field, a ZIPCode field and a Src > field. I made the PK field a PK so that it has a unique index. I then > wrote two queries, one that pulls data from one field plus the PK plus > a 1 as Src and appends it to my new table. The other query pulls data > from the other field plus the PK plus a 2 as the Src and appends to > the table. Run the first query. All records that have anything in > the ZIP code are put in the table. Run the second query. All the > records that have something in the second ZIP field but aren't already > in the table get put in the table. > > Now I have a table with ONE field, with data from one or the other > field, with a column which tells me which source field it came from, > with a PK to join back up to the main table. > > Of course my second query is failing to append at all because the PK > already exists in the new table. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > Francisco Tapia > Sent: Wednesday, September 22, 2004 1:30 AM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > JOHN! 
> > I know you're busy these days but it works like this > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > WHEN Fieldothercase = 'othervalue THEN > FieldWhenOtherValue > ELSE FieldwhenELSE > END AS AliasName, > Next Field > > >From TableName > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > wrote: > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > Sorry, my Outlook is totally screwed up > > > > That's tautology. -- -Francisco http://ft316db.VOTEorNOT.org _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From rl_stewart at highstream.net Wed Sep 22 13:58:29 2004 From: rl_stewart at highstream.net (Robert L. Stewart) Date: Wed, 22 Sep 2004 13:58:29 -0500 Subject: [dba-SQLServer] Re: Help! Violation of primary key constraint In-Reply-To: <200409221701.i8MH1gL06759@databaseadvisors.com> Message-ID: <5.1.0.14.2.20040922135645.01634008@pop3.highstream.net> John, This is something that I would normally use DTS to do. You can set the number of errors allowed and then set the commit for a certain number of records. You could do the join in the "source" so that you would be limited from the beginning to only records that were not in the table already. Robert At 12:01 PM 9/22/2004 -0500, you wrote: >Date: Wed, 22 Sep 2004 11:40:18 -0400 >From: "John W. Colby" >Subject: [dba-SQLServer] Help! Violation of primary key constraint >To: dba-sqlserver at databaseadvisors.com >Message-ID: <000001c4a0ba$77bb47a0$e8dafea9 at ColbyM6805> >Content-Type: text/plain; charset=US-ASCII > >In the nVLDB database I am working on, there are two sets of addresses, one >that the original person supplied, another that comes from the post office >when that person moves. 
I need to build a single table where I pull the zip >(and later the rest of the fields) from the post office supplied field if >there is any data in there, then where no PO supplied data, pull the data >from the original address fields. > >I have a PK field in my big table, int, identity. In my new address table, >I created a matching PK field which I set to be the PK, not identity but >with a unique index. I then add a zip field and a byte Src field to hold a >number that says which field the source data came from in the big table - >1=PO updated address data, 2=original data. > >I built a view to pull the PK and the PO supplied zip from the big table and >a 1 for the SRC field, then built a query to append that to the new table I >am building. That worked and I now have about 10 million records in this >new table with data supplied by the Post Office saying the person moved and >this is their new address. > >I built another query to pull the PK and the zip from the original address >field. I THOUGHT I could just append that data into the new table and where >there was a collision in the PK (the data was already in the new table from >the Post Office) the record would drop on the floor, all others would go in >the new table. > >This doesn't work for some reason. As soon as I try to append a record with >a PK collision the whole thing aborts with a "violation of PK constraint" >error message. > >FIRST QUESTION: Why does that happen and is there a way to cause it to >continue processing so that the records where the PKs collide just drop out? > >Given the failure of that method, I thought I'd build a "not in" query like >I would do in Access, where I would pull all the records where the PK in the >Big table is not found in the new table. SQL Server didn't like my syntax >on that one. > >So... 
I built a query where I joined the big table on the new table on PK >with the <> operator in the join, figuring I would get a set of records from >the big table where none of the PKs were in the new table. An hour later >that query is still running.... > >It seems like such a simple thing. I'm sure you hear this a lot, but in >Access I would have had it done hours ago. Well... I'd have accomplished >the task hours ago. With 165 million records Access itself would NEVER have >finished the task (which is why I'm here). > >So, how do I accomplish what I am after here. > >John W. Colby From rl_stewart at highstream.net Wed Sep 22 14:01:41 2004 From: rl_stewart at highstream.net (Robert L. Stewart) Date: Wed, 22 Sep 2004 14:01:41 -0500 Subject: [dba-SQLServer] Re: Iif in SQL Server In-Reply-To: <200409221242.i8MCg7L10628@databaseadvisors.com> Message-ID: <5.1.0.14.2.20040922140048.0141f7c0@pop3.highstream.net> John, No, CASE is valid in a SQL statement. So, you can use it in place of an IIF statement. Robert At 07:42 AM 9/22/2004 -0500, you wrote: >Date: Wed, 22 Sep 2004 08:39:44 -0400 >From: "John W. Colby" >Subject: RE: [dba-SQLServer] Iif in SQL Server >To: dba-sqlserver at databaseadvisors.com >Message-ID: <002601c4a0a1$3fb6cd20$e8dafea9 at ColbyM6805> >Content-Type: text/plain; charset=us-ascii > >Are we talking about code in an Sproc here? I am not talking about >AccessVB. I am looking for something to work with 165 million records and >return results in a few minutes. > >In the end I built a table with a PK field, a ZIPCode field and a Src field. >I made the PK field a PK so that it has a unique index. I then wrote two >queries, one that pulls data from one field plus the PK plus a 1 as Src and >appends it to my new table. The other query pulls data from the other field >plus the PK plus a 2 as the Src and appends to the table. Run the first >query. All records that have anything in the ZIP code are put in the table. >Run the second query. 
All the records that have something in the second ZIP >field but aren't already in the table get put in the table. > >Now I have a table with ONE field, with data from one or the other field, >with a column which tells me which source field it came from, with a PK to >join back up to the main table. > >Of course my second query is failing to append at all because the PK already >exists in the new table. > >John W. Colby From fhtapia at gmail.com Wed Sep 22 14:20:03 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 22 Sep 2004 12:20:03 -0700 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <000601c4a0d4$b95da710$e8dafea9@ColbyM6805> References: <000601c4a0d4$b95da710$e8dafea9@ColbyM6805> Message-ID: So is that gonna work for you? On Wed, 22 Sep 2004 14:48:10 -0400, John W. Colby wrote: > No, I don't want a report, at least not atm. I need the ability to count > these things over and over and over. Thus it has to be FAST. Clients call > and ask "how many addresses in these 230 zip codes? That kind of stuff. > The zips can come from two different places, thus I want to pull them out > and place them in a single table where I can tell where they came from if I > need to but I only have to do a count on a single indexed column. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Wednesday, September 22, 2004 2:06 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > You want a report right? ie a resultset... yes a sproc John.. you don't have > to CREATE a sproc, but doing so will store the optimizations for the sproc > in the server and possibly even caching parts of the report (since the data > hasn't changed). > > to create a sproc > > CREATE PROCEDURE stp_MyNewSprocNamingConvention > > AS > > SELECT FIELD1, Field2, Field3, Case... 
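One caution on the <> join that came up earlier in the thread: joining on bigtable.PK <> newtable.PK is not an anti-join. It pairs every row with every row whose PK merely differs, so on two large tables the intermediate result explodes, which would explain a query still running after an hour. A tiny sketch of the arithmetic on toy tables (SQLite via Python; names invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (PK INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE dest (PK INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(1, 101)])  # 100 rows
con.executemany("INSERT INTO dest VALUES (?)", [(i,) for i in range(1, 11)])  # 10 rows

# The <> join: each big row matches every dest row except its own PK.
n = con.execute(
    "SELECT COUNT(*) FROM big JOIN dest ON big.PK <> dest.PK").fetchone()[0]
print(n)  # 990 = 100*10 - 10 equal pairs; nearly a cross join, not an anti-join

# The anti-join actually wanted: big rows with no matching PK in dest.
m = con.execute("""
    SELECT COUNT(*) FROM big
    LEFT JOIN dest ON big.PK = dest.PK
    WHERE dest.PK IS NULL
""").fetchone()[0]
print(m)  # 90
```

Scale those 100 and 10 up to 165 million and 10 million and the <> version approaches 1.6 quadrillion candidate pairs, while the LEFT JOIN anti-join stays proportional to the table sizes.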
> FROM Table1 > WHERE ClauseoptionsHere > > On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby > wrote: > > Are we talking about code in an Sproc here? I am not talking about > > AccessVB. I am looking for something to work with 165 million records > > and return results in a few minutes. > > > > In the end I built a table with a PK field, a ZIPCode field and a Src > > field. I made the PK field a PK so that it has a unique index. I then > > wrote two queries, one that pulls data from one field plus the PK plus > > a 1 as Src and appends it to my new table. The other query pulls data > > from the other field plus the PK plus a 2 as the Src and appends to > > the table. Run the first query. All records that have anything in > > the ZIP code are put in the table. Run the second query. All the > > records that have something in the second ZIP field but aren't already > > in the table get put in the table. > > > > Now I have a table with ONE field, with data from one or the other > > field, with a column which tells me which source field it came from, > > with a PK to join back up to the main table. > > > > Of course my second query is failing to append at all because the PK > > already exists in the new table. > > > > John W. Colby > > www.ColbyConsulting.com > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > Francisco Tapia > > Sent: Wednesday, September 22, 2004 1:30 AM > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > JOHN! 
> > > > I know you're busy these days but it works like this > > > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > > WHEN Fieldothercase = 'othervalue THEN > > FieldWhenOtherValue > > ELSE FieldwhenELSE > > END AS AliasName, > > Next Field > > > > >From TableName > > > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > > wrote: > > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > > > Sorry, my Outlook is totally screwed up > > > > > > That's tautology. -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Wed Sep 22 14:34:21 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 22 Sep 2004 15:34:21 -0400 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: Message-ID: <000001c4a0db$2bd8b180$e8dafea9@ColbyM6805> It would work IF I could get a second set of records to append to the table. ATM the second append query immediately bombs with the "violation of primary key constraint" error. It appears that the second append (correctly) attempts to append SOME records with a PK already in the table. What I want is for the append query to silently move on to the next record when that happens until it finds records where the PK ISN'T already in the table, then it appends THOSE records. What happens is that the entire query halts because of the first PK collision. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, September 22, 2004 3:20 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Iif in SQL Server So is that gonna work for you? On Wed, 22 Sep 2004 14:48:10 -0400, John W. Colby wrote: > No, I don't want a report, at least not atm. I need the ability to > count these things over and over and over. Thus it has to be FAST. > Clients call and ask "how many addresses in these 230 zip codes? That > kind of stuff. 
The zips can come from two different places, thus I > want to pull them out and place them in a single table where I can > tell where they came from if I need to but I only have to do a count > on a single indexed column. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > Francisco Tapia > Sent: Wednesday, September 22, 2004 2:06 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > You want a report right? ie a resultset... yes a sproc John.. you > don't have to CREATE a sproc, but doing so will store the > optimizations for the sproc in the server and possibly even caching > parts of the report (since the data hasn't changed). > > to create a sproc > > CREATE PROCEDURE stp_MyNewSprocNamingConvention > > AS > > SELECT FIELD1, Field2, Field3, Case... > FROM Table1 > WHERE ClauseoptionsHere > > On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby > wrote: > > Are we talking about code in an Sproc here? I am not talking about > > AccessVB. I am looking for something to work with 165 million > > records and return results in a few minutes. > > > > In the end I built a table with a PK field, a ZIPCode field and a > > Src field. I made the PK field a PK so that it has a unique index. > > I then wrote two queries, one that pulls data from one field plus > > the PK plus a 1 as Src and appends it to my new table. The other > > query pulls data from the other field plus the PK plus a 2 as the > > Src and appends to the table. Run the first query. All records > > that have anything in the ZIP code are put in the table. Run the > > second query. All the records that have something in the second ZIP > > field but aren't already in the table get put in the table. 
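[The two-append-query approach John describes above can be sketched roughly as follows. This is an editorial sketch only: the table and column names (tblMain, tblZips, ZipA, ZipB) are placeholders, not John's actual schema.]

```sql
-- Sketch only: tblMain, tblZips, ZipA, ZipB are hypothetical stand-ins.
CREATE TABLE tblZips (
    PK      INT     NOT NULL PRIMARY KEY,  -- same PK as the main table
    ZIPCode CHAR(5) NOT NULL,
    Src     TINYINT NOT NULL               -- 1 = first zip field, 2 = second
);

-- First append: every record with a value in the first zip field.
INSERT INTO tblZips (PK, ZIPCode, Src)
SELECT PK, ZipA, 1
FROM tblMain
WHERE ZipA IS NOT NULL;

-- Second append: records with a value in the second zip field.
-- As written, this fails on the first PK already inserted above,
-- which is exactly the "violation of primary key constraint" error
-- discussed later in the thread.
INSERT INTO tblZips (PK, ZIPCode, Src)
SELECT PK, ZipB, 2
FROM tblMain
WHERE ZipB IS NOT NULL;
```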
> > > > Now I have a table with ONE field, with data from one or the other > > field, with a column which tells me which source field it came from, > > with a PK to join back up to the main table. > > > > Of course my second query is failing to append at all because the PK > > already exists in the new table. > > > > John W. Colby > > www.ColbyConsulting.com > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > Francisco Tapia > > Sent: Wednesday, September 22, 2004 1:30 AM > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > JOHN! > > > > I know you're busy these days but it works like this > > > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > > WHEN Fieldothercase = 'othervalue THEN > > FieldWhenOtherValue > > ELSE FieldwhenELSE > > END AS AliasName, > > Next Field > > > > >From TableName > > > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > > wrote: > > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > > > Sorry, my Outlook is totally screwed up > > > > > > That's tautology. -- -Francisco http://ft316db.VOTEorNOT.org _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Wed Sep 22 15:08:57 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 22 Sep 2004 13:08:57 -0700 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <000001c4a0db$2bd8b180$e8dafea9@ColbyM6805> References: <000001c4a0db$2bd8b180$e8dafea9@ColbyM6805> Message-ID: A solution is for your APPEND query (SELECT statement) to include a clause that excludes the PK's Something like SELECT FIeld1, Field2, ... FROM table WHERE PK NOT EXISTS(SELECT PK From PrimaryTable) On Wed, 22 Sep 2004 15:34:21 -0400, John W. 
Colby wrote: > It would work IF I could get a second set of records to append to the table. > ATM the second append query immediately bombs with the "violation of primary > key constraint" error. It appears that the second append (correctly) > attempts to append SOME records with a PK already in the table. What I want > is for the append query to silently move on to the next record when that > happens until it finds records where the PK ISN'T already in the table, then > it appends THOSE records. What happens is that the entire query halts > because of the first PK collision. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco > Tapia > Sent: Wednesday, September 22, 2004 3:20 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > So is that gonna work for you? > > On Wed, 22 Sep 2004 14:48:10 -0400, John W. Colby > wrote: > > No, I don't want a report, at least not atm. I need the ability to > > count these things over and over and over. Thus it has to be FAST. > > Clients call and ask "how many addresses in these 230 zip codes? That > > kind of stuff. The zips can come from two different places, thus I > > want to pull them out and place them in a single table where I can > > tell where they came from if I need to but I only have to do a count > > on a single indexed column. > > > > John W. Colby > > www.ColbyConsulting.com > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > Francisco Tapia > > Sent: Wednesday, September 22, 2004 2:06 PM > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > You want a report right? ie a resultset... yes a sproc John.. 
you > > don't have to CREATE a sproc, but doing so will store the > > optimizations for the sproc in the server and possibly even caching > > parts of the report (since the data hasn't changed). > > > > to create a sproc > > > > CREATE PROCEDURE stp_MyNewSprocNamingConvention > > > > AS > > > > SELECT FIELD1, Field2, Field3, Case... > > FROM Table1 > > WHERE ClauseoptionsHere > > > > On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby > > wrote: > > > Are we talking about code in an Sproc here? I am not talking about > > > AccessVB. I am looking for something to work with 165 million > > > records and return results in a few minutes. > > > > > > In the end I built a table with a PK field, a ZIPCode field and a > > > Src field. I made the PK field a PK so that it has a unique index. > > > I then wrote two queries, one that pulls data from one field plus > > > the PK plus a 1 as Src and appends it to my new table. The other > > > query pulls data from the other field plus the PK plus a 2 as the > > > Src and appends to the table. Run the first query. All records > > > that have anything in the ZIP code are put in the table. Run the > > > second query. All the records that have something in the second ZIP > > > field but aren't already in the table get put in the table. > > > > > > Now I have a table with ONE field, with data from one or the other > > > field, with a column which tells me which source field it came from, > > > with a PK to join back up to the main table. > > > > > > Of course my second query is failing to append at all because the PK > > > already exists in the new table. > > > > > > John W. 
Colby > > > www.ColbyConsulting.com > > > > > > > > > > > > -----Original Message----- > > > From: dba-sqlserver-bounces at databaseadvisors.com > > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > > Francisco Tapia > > > Sent: Wednesday, September 22, 2004 1:30 AM > > > To: dba-sqlserver at databaseadvisors.com > > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > > > JOHN! > > > > > > I know you're busy these days but it works like this > > > > > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > > > WHEN Fieldothercase = 'othervalue THEN > > > FieldWhenOtherValue > > > ELSE FieldwhenELSE > > > END AS AliasName, > > > Next Field > > > > > > >From TableName > > > > > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > > > wrote: > > > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > > > > > Sorry, my Outlook is totally screwed up > > > > > > > > That's tautology. > > -- > -Francisco > http://ft316db.VOTEorNOT.org _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Wed Sep 22 16:29:31 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Wed, 22 Sep 2004 17:29:31 -0400 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: Message-ID: <000101c4a0eb$42c2adf0$e8dafea9@ColbyM6805> Haaa... That is it. The not exists clause. I'll try it out and let you know if that does what I need. Thanks, John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, September 22, 2004 4:09 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Iif in SQL Server A solution is for your APPEND query (SELECT statement) to include a clause that excludes the PK's Something like SELECT FIeld1, Field2, ... FROM table WHERE PK NOT EXISTS(SELECT PK From PrimaryTable) On Wed, 22 Sep 2004 15:34:21 -0400, John W. Colby wrote: > It would work IF I could get a second set of records to append to the > table. ATM the second append query immediately bombs with the > "violation of primary key constraint" error. It appears that the > second append (correctly) attempts to append SOME records with a PK > already in the table. What I want is for the append query to silently > move on to the next record when that happens until it finds records > where the PK ISN'T already in the table, then it appends THOSE > records. What happens is that the entire query halts because of the > first PK collision. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > Francisco Tapia > Sent: Wednesday, September 22, 2004 3:20 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > So is that gonna work for you? > > On Wed, 22 Sep 2004 14:48:10 -0400, John W. Colby > wrote: > > No, I don't want a report, at least not atm. I need the ability to > > count these things over and over and over. Thus it has to be FAST. > > Clients call and ask "how many addresses in these 230 zip codes? > > That kind of stuff. 
The zips can come from two different places, > > thus I want to pull them out and place them in a single table where > > I can tell where they came from if I need to but I only have to do a > > count on a single indexed column. > > > > John W. Colby > > www.ColbyConsulting.com > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > Francisco Tapia > > Sent: Wednesday, September 22, 2004 2:06 PM > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > You want a report right? ie a resultset... yes a sproc John.. you > > don't have to CREATE a sproc, but doing so will store the > > optimizations for the sproc in the server and possibly even caching > > parts of the report (since the data hasn't changed). > > > > to create a sproc > > > > CREATE PROCEDURE stp_MyNewSprocNamingConvention > > > > AS > > > > SELECT FIELD1, Field2, Field3, Case... > > FROM Table1 > > WHERE ClauseoptionsHere > > > > On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby > > wrote: > > > Are we talking about code in an Sproc here? I am not talking > > > about AccessVB. I am looking for something to work with 165 > > > million records and return results in a few minutes. > > > > > > In the end I built a table with a PK field, a ZIPCode field and a > > > Src field. I made the PK field a PK so that it has a unique index. > > > I then wrote two queries, one that pulls data from one field plus > > > the PK plus a 1 as Src and appends it to my new table. The other > > > query pulls data from the other field plus the PK plus a 2 as the > > > Src and appends to the table. Run the first query. All records > > > that have anything in the ZIP code are put in the table. Run the > > > second query. All the records that have something in the second > > > ZIP field but aren't already in the table get put in the table. 
> > > > > > Now I have a table with ONE field, with data from one or the other > > > field, with a column which tells me which source field it came > > > from, with a PK to join back up to the main table. > > > > > > Of course my second query is failing to append at all because the > > > PK already exists in the new table. > > > > > > John W. Colby > > > www.ColbyConsulting.com > > > > > > > > > > > > -----Original Message----- > > > From: dba-sqlserver-bounces at databaseadvisors.com > > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > > Francisco Tapia > > > Sent: Wednesday, September 22, 2004 1:30 AM > > > To: dba-sqlserver at databaseadvisors.com > > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > > > JOHN! > > > > > > I know you're busy these days but it works like this > > > > > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > > > WHEN Fieldothercase = 'othervalue THEN > > > FieldWhenOtherValue > > > ELSE FieldwhenELSE > > > END AS AliasName, > > > Next Field > > > > > > >From TableName > > > > > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > > > wrote: > > > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > > > > > Sorry, my Outlook is totally screwed up > > > > > > > > That's tautology. 
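[Francisco's CASE example, quoted above, drops the closing quote on 'othervalue' and uses placeholder names throughout. A cleaned-up version of the same searched-CASE pattern, with all field and value names still placeholders, would be:]

```sql
-- Placeholder names throughout; note the closing quote on 'othervalue'.
SELECT CASE WHEN Field1 = 'value'      THEN FieldWhenTrue
            WHEN Field2 = 'othervalue' THEN FieldWhenOtherValue
            ELSE FieldWhenElse
       END AS AliasName,
       NextField
FROM TableName;
```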
> > -- > -Francisco > http://ft316db.VOTEorNOT.org > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From artful at rogers.com Wed Sep 22 19:13:14 2004 From: artful at rogers.com (Arthur Fuller) Date: Wed, 22 Sep 2004 20:13:14 -0400 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: Message-ID: <00b401c4a102$1ef7f670$6501a8c0@rock> That will probably work but with that many rows it's going to take days. Try a JOIN instead, JC, something along the lines of: Select * from T1 OUTER JOIN T2 on T1.PK = T2.PK WHERE T2.AddressColumn IS NULL; A. -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Wednesday, September 22, 2004 4:09 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Iif in SQL Server A solution is for your APPEND query (SELECT statement) to include a clause that excludes the PK's Something like SELECT FIeld1, Field2, ... FROM table WHERE PK NOT EXISTS(SELECT PK From PrimaryTable) On Wed, 22 Sep 2004 15:34:21 -0400, John W. Colby wrote: > It would work IF I could get a second set of records to append to the > table. ATM the second append query immediately bombs with the > "violation of primary key constraint" error. 
It appears that the > second append (correctly) attempts to append SOME records with a PK > already in the table. What I want is for the append query to silently > move on to the next record when that happens until it finds records > where the PK ISN'T already in the table, then it appends THOSE > records. What happens is that the entire query halts because of the > first PK collision. > > John W. Colby > www.ColbyConsulting.com > > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > Francisco Tapia > Sent: Wednesday, September 22, 2004 3:20 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > So is that gonna work for you? > > On Wed, 22 Sep 2004 14:48:10 -0400, John W. Colby > wrote: > > No, I don't want a report, at least not atm. I need the ability to > > count these things over and over and over. Thus it has to be FAST. > > Clients call and ask "how many addresses in these 230 zip codes? > > That kind of stuff. The zips can come from two different places, > > thus I want to pull them out and place them in a single table where > > I can tell where they came from if I need to but I only have to do a > > count on a single indexed column. > > > > John W. Colby > > www.ColbyConsulting.com > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > Francisco Tapia > > Sent: Wednesday, September 22, 2004 2:06 PM > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > You want a report right? ie a resultset... yes a sproc John.. you > > don't have to CREATE a sproc, but doing so will store the > > optimizations for the sproc in the server and possibly even caching > > parts of the report (since the data hasn't changed). 
> > > > to create a sproc > > > > CREATE PROCEDURE stp_MyNewSprocNamingConvention > > > > AS > > > > SELECT FIELD1, Field2, Field3, Case... > > FROM Table1 > > WHERE ClauseoptionsHere > > > > On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby > > wrote: > > > Are we talking about code in an Sproc here? I am not talking > > > about AccessVB. I am looking for something to work with 165 > > > million records and return results in a few minutes. > > > > > > In the end I built a table with a PK field, a ZIPCode field and a > > > Src field. I made the PK field a PK so that it has a unique index. > > > I then wrote two queries, one that pulls data from one field plus > > > the PK plus a 1 as Src and appends it to my new table. The other > > > query pulls data from the other field plus the PK plus a 2 as the > > > Src and appends to the table. Run the first query. All records > > > that have anything in the ZIP code are put in the table. Run the > > > second query. All the records that have something in the second > > > ZIP field but aren't already in the table get put in the table. > > > > > > Now I have a table with ONE field, with data from one or the other > > > field, with a column which tells me which source field it came > > > from, with a PK to join back up to the main table. > > > > > > Of course my second query is failing to append at all because the > > > PK already exists in the new table. > > > > > > John W. Colby > > > www.ColbyConsulting.com > > > > > > > > > > > > -----Original Message----- > > > From: dba-sqlserver-bounces at databaseadvisors.com > > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > > Francisco Tapia > > > Sent: Wednesday, September 22, 2004 1:30 AM > > > To: dba-sqlserver at databaseadvisors.com > > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > > > JOHN! 
> > > > > > I know you're busy these days but it works like this > > > > > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > > > WHEN Fieldothercase = 'othervalue THEN > > > FieldWhenOtherValue > > > ELSE FieldwhenELSE > > > END AS AliasName, > > > Next Field > > > > > > >From TableName > > > > > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > > > wrote: > > > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > > > > > Sorry, my Outlook is totally screwed up > > > > > > > > That's tautology. > > -- > -Francisco > http://ft316db.VOTEorNOT.org > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Wed Sep 22 21:58:18 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 22 Sep 2004 19:58:18 -0700 Subject: [dba-SQLServer] Iif in SQL Server In-Reply-To: <00b401c4a102$1ef7f670$6501a8c0@rock> References: <00b401c4a102$1ef7f670$6501a8c0@rock> Message-ID: A very good point... the Join clause is going to be MUCH faster in this situation.... thanks Arthur. On Wed, 22 Sep 2004 20:13:14 -0400, Arthur Fuller wrote: > That will probably work but with that many rows it's going to take days. > Try a JOIN instead, JC, something along the lines of: > > Select * from T1 OUTER JOIN T2 on T1.PK = T2.PK WHERE T2.AddressColumn > IS NULL; > > A. 
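[For the record, the suggestion quoted above, `WHERE PK NOT EXISTS(SELECT PK From PrimaryTable)`, is not valid T-SQL as written: NOT EXISTS takes only a subquery, normally correlated to the outer row, and Arthur's `OUTER JOIN` needs a LEFT or RIGHT keyword in SQL Server. Hedged sketches of the three anti-join forms discussed in this thread, with all table and column names as placeholders:]

```sql
-- 1. Correlated NOT EXISTS (what Francisco's suggestion amounts to):
INSERT INTO tblZips (PK, ZIPCode, Src)
SELECT m.PK, m.ZipB, 2
FROM tblMain m
WHERE m.ZipB IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM tblZips z WHERE z.PK = m.PK);

-- 2. NOT IN (the form John later reports trying): replace the
--    NOT EXISTS line above with:
--    AND m.PK NOT IN (SELECT z.PK FROM tblZips z)

-- 3. Arthur's outer-join form, spelled out as a LEFT OUTER JOIN;
--    unmatched rows have a NULL on the right side.
INSERT INTO tblZips (PK, ZIPCode, Src)
SELECT m.PK, m.ZipB, 2
FROM tblMain m
LEFT OUTER JOIN tblZips z ON m.PK = z.PK
WHERE m.ZipB IS NOT NULL
  AND z.PK IS NULL;
```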
> > > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > Francisco Tapia > Sent: Wednesday, September 22, 2004 4:09 PM > To: dba-sqlserver at databaseadvisors.com > Subject: Re: [dba-SQLServer] Iif in SQL Server > > A solution is for your APPEND query (SELECT statement) to include a > clause that excludes the PK's > > Something like SELECT FIeld1, Field2, ... > FROM table > WHERE PK NOT EXISTS(SELECT PK From PrimaryTable) > > On Wed, 22 Sep 2004 15:34:21 -0400, John W. Colby > wrote: > > It would work IF I could get a second set of records to append to the > > table. ATM the second append query immediately bombs with the > > "violation of primary key constraint" error. It appears that the > > second append (correctly) attempts to append SOME records with a PK > > already in the table. What I want is for the append query to silently > > > move on to the next record when that happens until it finds records > > where the PK ISN'T already in the table, then it appends THOSE > > records. What happens is that the entire query halts because of the > > first PK collision. > > > > John W. Colby > > www.ColbyConsulting.com > > > > > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > Francisco Tapia > > Sent: Wednesday, September 22, 2004 3:20 PM > > To: dba-sqlserver at databaseadvisors.com > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > So is that gonna work for you? > > > > On Wed, 22 Sep 2004 14:48:10 -0400, John W. Colby > > wrote: > > > No, I don't want a report, at least not atm. I need the ability to > > > count these things over and over and over. Thus it has to be FAST. > > > Clients call and ask "how many addresses in these 230 zip codes? > > > That kind of stuff. 
The zips can come from two different places, > > > thus I want to pull them out and place them in a single table where > > > I can tell where they came from if I need to but I only have to do a > > > > count on a single indexed column. > > > > > > John W. Colby > > > www.ColbyConsulting.com > > > > > > > > > > > > -----Original Message----- > > > From: dba-sqlserver-bounces at databaseadvisors.com > > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > > Francisco Tapia > > > Sent: Wednesday, September 22, 2004 2:06 PM > > > To: dba-sqlserver at databaseadvisors.com > > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > > > You want a report right? ie a resultset... yes a sproc John.. you > > > don't have to CREATE a sproc, but doing so will store the > > > optimizations for the sproc in the server and possibly even caching > > > parts of the report (since the data hasn't changed). > > > > > > to create a sproc > > > > > > CREATE PROCEDURE stp_MyNewSprocNamingConvention > > > > > > AS > > > > > > SELECT FIELD1, Field2, Field3, Case... > > > FROM Table1 > > > WHERE ClauseoptionsHere > > > > > > On Wed, 22 Sep 2004 08:39:44 -0400, John W. Colby > > > wrote: > > > > Are we talking about code in an Sproc here? I am not talking > > > > about AccessVB. I am looking for something to work with 165 > > > > million records and return results in a few minutes. > > > > > > > > In the end I built a table with a PK field, a ZIPCode field and a > > > > Src field. I made the PK field a PK so that it has a unique index. > > > > > I then wrote two queries, one that pulls data from one field plus > > > > the PK plus a 1 as Src and appends it to my new table. The other > > > > query pulls data from the other field plus the PK plus a 2 as the > > > > Src and appends to the table. Run the first query. All records > > > > that have anything in the ZIP code are put in the table. Run the > > > > second query. 
All the records that have something in the second > > > > ZIP field but aren't already in the table get put in the table. > > > > > > > > Now I have a table with ONE field, with data from one or the other > > > > > field, with a column which tells me which source field it came > > > > from, with a PK to join back up to the main table. > > > > > > > > Of course my second query is failing to append at all because the > > > > PK already exists in the new table. > > > > > > > > John W. Colby > > > > www.ColbyConsulting.com > > > > > > > > > > > > > > > > -----Original Message----- > > > > From: dba-sqlserver-bounces at databaseadvisors.com > > > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of > > > > Francisco Tapia > > > > Sent: Wednesday, September 22, 2004 1:30 AM > > > > To: dba-sqlserver at databaseadvisors.com > > > > Subject: Re: [dba-SQLServer] Iif in SQL Server > > > > > > > > JOHN! > > > > > > > > I know you're busy these days but it works like this > > > > > > > > SELECT CASE WHEN FIELD = 'value' THEN FieldWhenTrue > > > > WHEN Fieldothercase = 'othervalue THEN > > > > FieldWhenOtherValue > > > > ELSE FieldwhenELSE > > > > END AS AliasName, > > > > Next Field > > > > > > > > >From TableName > > > > > > > > On Wed, 22 Sep 2004 11:45:36 +1000, Stuart McLachlan > > > > wrote: > > > > > On 21 Sep 2004 at 21:38, John W. Colby wrote: > > > > > > > > > > > Sorry, my Outlook is totally screwed up > > > > > > > > > > That's tautology. 
> > > > -- > > -Francisco > > http://ft316db.VOTEorNOT.org > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > -- > -Francisco > http://ft316db.VOTEorNOT.org > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Thu Sep 23 09:05:23 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Thu, 23 Sep 2004 10:05:23 -0400 Subject: [dba-SQLServer] Violation of primary key constraint In-Reply-To: <000001c4a0ab$62cc3b60$060aa845@hargrove.internal> Message-ID: <000a01c4a176$5f96d5b0$e8dafea9@ColbyM6805> I created a NOT IN (SELECT...) and that does work. An outer join per Arthur's suggestion is marginally faster. In fact I have now succeeded in the operation using the outer join method. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Mike & Doris Manning Sent: Wednesday, September 22, 2004 9:52 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Violation of primary key constraint Take a look at the NOT IN topic in BOL. 
Doris Manning Database Administrator Hargrove Inc. www.hargroveinc.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 22, 2004 8:56 AM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Violation of primary key constraint Well I just discovered the <> operator in the Join in SQL Server. I think this will solve my problem since I just back up and pull only the records with a ZIP in field B where the PK <> PK in the destination table. NO idea what implication this has for execution time though. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Wednesday, September 22, 2004 8:48 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Violation of primary key constraint I built a table that I want to dump records in to. The source data is the PK of my big table plus one of two different zip code fields. Two queries, all the records - PKs plus ZIP - from any records with a Zip in field A. The second all the records - PK plus Zip - with any Zip in field B. Append query A, append query B. In access, Query B would append all the records where there was not a collision with the PKs already in the table. In SQL Server the entire second query just fails because of a collision, giving me a "Violation of primary key constraint, statement terminated". As Bill Cosby says in one of his wonderful acts, "brain damaged children". 'Scuse me, I WANT the primary key constraint to prevent records from going in but I also want those records without a violation to go in. So how do I override this brain damaged child and tell it to accept those records that do not violate the PK constraint? John W. 
Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Thu Sep 23 09:08:53 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Thu, 23 Sep 2004 10:08:53 -0400 Subject: [dba-SQLServer] "Data dictionary" In-Reply-To: <000001c4a0ab$62cc3b60$060aa845@hargrove.internal> Message-ID: <000b01c4a176$def02730$e8dafea9@ColbyM6805> My client wants what he refers to as a data dictionary. What he is referring to is a list of all the DISTINCT values in the data for a specific set of fields. Is there any (cheap / free) tool that can do this? I need a table or spreadsheet, with the field name, then the distinct values in that field. I can do this field by field by building queries but if there is something quick and easy it sure would help. John W. Colby www.ColbyConsulting.com From mikedorism at adelphia.net Thu Sep 23 09:37:07 2004 From: mikedorism at adelphia.net (Mike & Doris Manning) Date: Thu, 23 Sep 2004 10:37:07 -0400 Subject: [dba-SQLServer] "Data dictionary" In-Reply-To: <000b01c4a176$def02730$e8dafea9@ColbyM6805> Message-ID: <000001c4a17a$cdcbbce0$060aa845@hargrove.internal> Hi John, You can actually do this yourself with no queries involved. Here is a link to get you started. http://www.paragoncorporation.com/ArticleDetail.aspx?ArticleID=1 Doris Manning Database Administrator Hargrove Inc. 
www.hargroveinc.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Thursday, September 23, 2004 10:09 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] "Data dictionary" My client wants what he refers to as a data dictionary. What he is referring to is a list of all the DISTINCT values in the data for a specific set of fields. Is there any (cheap / free) tool that can do this? I need a table or spreadsheet, with the field name, then the distinct values in that field. I can do this field by field by building queries but if there is something quick and easy it sure would help. John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From rl_stewart at highstream.net Thu Sep 23 10:57:03 2004 From: rl_stewart at highstream.net (Robert L. Stewart) Date: Thu, 23 Sep 2004 10:57:03 -0500 Subject: [dba-SQLServer] Re: "Data dictionary" In-Reply-To: <200409231415.i8NEFRo18586@databaseadvisors.com> Message-ID: <5.1.0.14.2.20040923104348.013c2d60@pop3.highstream.net> John, Nothing that is free or cheap. The company I work for just dropped about 250,000 for a package to do something like this. Create a table to hold the data, with three columns: ColumnName, ColumnValue, and CountOfValue. Then create a stored procedure with an input parameter of @ColumnName that builds a SQL statement to append the data into the table. BTW, you could use this table for the zip code count also. You will never be able to get good speed out of the other options because of the size of your DB. By running the SP and loading the data into a reporting table, you will be able to get extremely good performance for the reporting. 
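A minimal sketch of the table-plus-stored-procedure approach Robert describes, using dynamic SQL since the column name arrives as a parameter (all object names here are hypothetical, and the big table is stood in by tblBigTable):

```sql
-- Reporting table: one row per (field, distinct value) pair.
CREATE TABLE tblDataDictionary (
    ColumnName   varchar(128) NOT NULL,
    ColumnValue  varchar(255) NULL,
    CountOfValue int          NOT NULL
)
GO

-- Loads the distinct values and their counts for one column.
-- QUOTENAME guards the injected column name.
CREATE PROCEDURE usp_LoadDataDictionary @ColumnName sysname AS
DECLARE @sql nvarchar(4000)
SET @sql = N'INSERT INTO tblDataDictionary (ColumnName, ColumnValue, CountOfValue) '
         + N'SELECT ''' + @ColumnName + N''', CAST(' + QUOTENAME(@ColumnName)
         + N' AS varchar(255)), COUNT(*) FROM tblBigTable GROUP BY ' + QUOTENAME(@ColumnName)
EXEC sp_executesql @sql
GO
```

Run it once per field of interest (e.g. EXEC usp_LoadDataDictionary 'State') and report off the small table instead of hitting the 65-million-row source every time.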
Also, the reason for the count of each value is because, from my experience, they end up wanting to see how many of the ones they consider a problem exist in the data. Robert At 09:15 AM 9/23/2004 -0500, you wrote: >Date: Thu, 23 Sep 2004 10:08:53 -0400 >From: "John W. Colby" >Subject: [dba-SQLServer] "Data dictionary" >To: dba-sqlserver at databaseadvisors.com >Message-ID: <000b01c4a176$def02730$e8dafea9 at ColbyM6805> >Content-Type: text/plain; charset=us-ascii > >My client wants what he refers to as a data dictionary. What he is >referring to is a list of all the DISTINCT values in the data for a specific >set of fields. Is there any (cheap / free) tool that can do this? I need a >table or spreadsheet, with the field name, then the distinct values in that >field. I can do this field by field by building queries but if there is >something quick and easy it sure would help. > >John W. Colby From rmoore at comtechpst.com Thu Sep 23 13:34:40 2004 From: rmoore at comtechpst.com (Ron Moore) Date: Thu, 23 Sep 2004 14:34:40 -0400 Subject: [dba-SQLServer] Deletion based on Compound Distinct Selection Message-ID: <200409231830.OAA06121@comtech.comtechpst.com> To All, I've been hammering away at this for a while and am under the gun now. I am trying to SELECT DISTINCT from 2 columns in table1, and use the return set to DELETE all records in table2 where both of the same two columns are equal to one of the distinct combinations in the return set. I have tried using the following which works for 1 column but not 2 columns: DELETE FROM DB2.dbo.TABLE2 WHERE COLUMN1,COLUMN2 IN ( SELECT DISTINCT COLUMN1,COLUMN2 FROM DB1.dbo.TABLE1 T1 ) Error is: Line 4: Incorrect syntax near ','. Any thoughts? I have the 'distinct' feeling I'm going about this the hard way! 
TIA, Ron From tuxedo_man at hotmail.com Thu Sep 23 14:58:43 2004 From: tuxedo_man at hotmail.com (Billy Pang) Date: Thu, 23 Sep 2004 19:58:43 +0000 Subject: [dba-SQLServer] Violation of primary key constraint Message-ID: If we are only talking about primary keys, "Not in" should have a similar execution plan when compared to the "outer join" method except that the "outer join" method uses an extra filter in the execution plan to get the same results. So "Not in" is theoretically supposed to be a molecule faster than "outer join". Billy >From: "John W. Colby" >Reply-To: dba-sqlserver at databaseadvisors.com >To: dba-sqlserver at databaseadvisors.com >Subject: RE: [dba-SQLServer] Violation of primary key constraint >Date: Thu, 23 Sep 2004 10:05:23 -0400 > >I created a NOT IN (SELECT...) and that does work. An outer join per >Arthur's suggestion is marginally faster. > >In fact I have now succeeded in the operation using the outer join method. > >John W. Colby >www.ColbyConsulting.com > >-----Original Message----- >From: dba-sqlserver-bounces at databaseadvisors.com >[mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Mike & >Doris Manning >Sent: Wednesday, September 22, 2004 9:52 AM >To: dba-sqlserver at databaseadvisors.com >Subject: RE: [dba-SQLServer] Violation of primary key constraint > > >Take a look at the NOT IN topic in BOL. > >Doris Manning >Database Administrator >Hargrove Inc. >www.hargroveinc.com > > >-----Original Message----- >From: dba-sqlserver-bounces at databaseadvisors.com >[mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. >Colby >Sent: Wednesday, September 22, 2004 8:56 AM >To: dba-sqlserver at databaseadvisors.com >Subject: RE: [dba-SQLServer] Violation of primary key constraint > > >Well I just discovered the <> operator in the Join in SQL Server. I think >this will solve my problem since I just back up and pull only the records >with a ZIP in field B where the PK <> PK in the destination table. 
NO idea >what implication this has for execution time though. > >John W. Colby >www.ColbyConsulting.com > >-----Original Message----- >From: dba-sqlserver-bounces at databaseadvisors.com >[mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. >Colby >Sent: Wednesday, September 22, 2004 8:48 AM >To: dba-sqlserver at databaseadvisors.com >Subject: [dba-SQLServer] Violation of primary key constraint > > >I built a table that I want to dump records in to. The source data is the >PK of my big table plus one of two different zip code fields. Two queries, >all the records - PKs plus ZIP - from any records with a Zip in field A. >The >second all the records - PK plus Zip - with any Zip in field B. Append >query A, append query B. In access, Query B would append all the records >where there was not a collision with the PKs already in the table. In SQL >Server the entire second query just fails because of a collision, giving me >a "Violation of primary key constraint, statement terminated". > >As Bill Cosby says in one of his wonderful acts, "brain damaged children". > >'Scuse me, I WANT the primary key constraint to prevent records from going >in but I also want those records without a violation to go in. So how do I >override this brain damaged child and tell it to accept those records that >do not violate the PK constraint? > >John W. 
Colby >www.ColbyConsulting.com > > >_______________________________________________ >dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com > From jwcolby at colbyconsulting.com Thu Sep 23 16:08:19 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Thu, 23 Sep 2004 17:08:19 -0400 Subject: [dba-SQLServer] Re: "Data dictionary" In-Reply-To: <5.1.0.14.2.20040923104348.013c2d60@pop3.highstream.net> Message-ID: <002101c4a1b1$76d6abc0$e8dafea9@ColbyM6805> Hey, tell them to drop a quarter million my way will ya? I'll take a month off and dev the same thing for them. John W. Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Robert L. 
Stewart Sent: Thursday, September 23, 2004 11:57 AM To: dba-sqlserver at databaseadvisors.com Cc: jwcolby at colbyconsulting.com Subject: [dba-SQLServer] Re: "Data dictionary" John, Nothing that is free or cheap. The company I work for just dropped about 250,000 for a package to do something like this. Create a table to hold the data. 2 columns ColumnName and COlumnValue and CountOfValue then create a stored procedure with a parameter of @ColumnName for input create a SQL statement that will append the data into the table BTW, you could also use this table for the zip code count also. You will never be able to get good speed out of the other options because of the size of your DB. By running the SP and loading the data into a reporting table, you will be able to get extremely good performance for the reporting. Also, the reason for the count of each value is because, from my experience, they end up wanting to see how many of ones they consider a problem exist in the data. Robert At 09:15 AM 9/23/2004 -0500, you wrote: >Date: Thu, 23 Sep 2004 10:08:53 -0400 >From: "John W. Colby" >Subject: [dba-SQLServer] "Data dictionary" >To: dba-sqlserver at databaseadvisors.com >Message-ID: <000b01c4a176$def02730$e8dafea9 at ColbyM6805> >Content-Type: text/plain; charset=us-ascii > >My client wants what he refers to as a data dictionary. What he is >referring to is a list of all the DISTINCT values in the data for a >specific set of fields. Is there any (cheap / free) tool that can do >this? I need a table or spreadsheet, with the field name, then the >distinct values in that field. I can do this field by field by >building queries but if there is something quick and easy it sure would >help. > >John W. 
Colby _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Thu Sep 23 17:57:10 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Fri, 24 Sep 2004 08:57:10 +1000 Subject: [dba-SQLServer] Deletion based on Compound Distinct Selection In-Reply-To: <200409231830.OAA06121@comtech.comtechpst.com> Message-ID: <4153E166.29589.4469E88@lexacorp.com.pg> On 23 Sep 2004 at 14:34, Ron Moore wrote: > > WHERE COLUMN1,COLUMN2 IN ( > > SELECT DISTINCT COLUMN1,COLUMN2 > You can only use one value for an "In". You can possibly concatenate the two values. If they are strings, try something like Where (Column1 + Column2) In (Select Distinct Column1 + Column2 .....) If numeric, you will probably need to convert them to strings first. -- Stuart From listmaster at databaseadvisors.com Thu Sep 23 20:27:48 2004 From: listmaster at databaseadvisors.com (Bryan Carbonnell) Date: Thu, 23 Sep 2004 21:27:48 -0400 Subject: [dba-SQLServer] Administrivia - ADMIN Server Troubles Message-ID: <41533FD4.8127.698BF2@localhost> The mail server that is running DBA's mailing lists is in the midst of an identity crisis. It doesn't want to act like a mail server, which is why some mail is taking hours and days to deliver, as some of you have noticed recently. So I will need to take the server off-line this weekend. The server will be going off-line at around 9pm EDT (UTC -0400). I can't say for sure when it will be back on-line, but it will be before Monday morning (UTC -0400) (late morning for those of you in Europe and the UK). Hopefully this will resolve the problems and thanks for your patience while we work through this. 
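As an alternative to the concatenation trick Stuart suggests, T-SQL also allows a joined DELETE (a FROM clause on the DELETE statement), which matches on both columns directly and needs no conversion to strings; the names below are taken from Ron's example:

```sql
-- Each TABLE2 row is deleted if a TABLE1 row matches on BOTH columns.
-- DISTINCT is unnecessary here: a row can only be deleted once,
-- however many TABLE1 combinations match it.
DELETE T2
FROM DB2.dbo.TABLE2 T2
INNER JOIN DB1.dbo.TABLE1 T1
    ON  T1.COLUMN1 = T2.COLUMN1
    AND T1.COLUMN2 = T2.COLUMN2
```

The concatenation approach also has a subtle edge case ('ab' + 'c' collides with 'a' + 'bc' unless a separator is added), which the join avoids entirely.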
From artful at rogers.com Fri Sep 24 06:31:19 2004 From: artful at rogers.com (Arthur Fuller) Date: Fri, 24 Sep 2004 07:31:19 -0400 Subject: [dba-SQLServer] Count of records with zip In-Reply-To: <000d01c4a009$d3d7c6a0$e8dafea9@ColbyM6805> Message-ID: <006601c4a22a$03165d60$6501a8c0@rock> SELECT COUNT(Customers.Zip), ZipCodes.Zip FROM ZipCodes INNER JOIN Customers ON ZipCodes.Zip = Customers.ZIP GROUP BY ZipCodes.Zip Should do it. Arthur -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. Colby Sent: Tuesday, September 21, 2004 2:36 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Count of records with zip I need what appears to me to be a crosstab, a COUNT of all records in a set of zip codes. In other words, I have a table of zips (about 100). I need a count of how many records in my nVLDB is in each zip code. Can anyone point me in the right direction in SQL Server? John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From rmoore at comtechpst.com Fri Sep 24 13:11:28 2004 From: rmoore at comtechpst.com (Ron Moore) Date: Fri, 24 Sep 2004 14:11:28 -0400 Subject: [dba-SQLServer] Deletion based on Compound Distinct Selection In-Reply-To: <4153E166.29589.4469E88@lexacorp.com.pg> Message-ID: <200409241807.OAA11549@comtech.comtechpst.com> Stuart wrote: You can only use one value for an "In". You can prossibly concatentate the two values. If they are strings, try something like Where (Column1 + Column2) In Select Distinct Column1 + Column2..... If numeric, you will probably need to convert them to strings first. Stuart, Thanks for that. I didn't realize the "IN" could only accommodate 1 value. 
In the interim, I selected distinct (on 3 columns now) and put results in a temp table. Then I was able to isolate records from the other table based on a join to the temp table where the 3 columns matched. All is well now. Thanks Again, Ron From joconnell at indy.rr.com Fri Sep 24 16:03:43 2004 From: joconnell at indy.rr.com (Joseph O'Connell) Date: Fri, 24 Sep 2004 16:03:43 -0500 Subject: [dba-SQLServer] Problem concatening a string and a numeric Message-ID: <003301c4a279$fbf81aa0$6701a8c0@joe> A field in a table contains integer values. How do I create a SQL statement that will prefix the values with a string? For example, if the field values are 1 2 3 I wish to return abc1 abc2 abc3 "SELECT 'abc' + lngCompID AS ModifiedValue FROM dbo.tblCompany" generates an error message: "Syntax error converting the varchar value 'abc' to a column of data type int" "SELECT 'abc' + Cstr(lngCompID) AS ModifiedValue FROM dbo.tblCompany" generates an error message: "'Cstr' is not a recognized function name" Joe O'Connell From CMackin at quiznos.com Fri Sep 24 16:21:52 2004 From: CMackin at quiznos.com (Mackin, Christopher) Date: Fri, 24 Sep 2004 15:21:52 -0600 Subject: [dba-SQLServer] Problem concatening a string and a numeric Message-ID: Unlike Access, the conversion between Numeric and String values needs to be done explicitly in T-SQL. Use this: SELECT 'abc' + CAST(lngCompID AS VarChar(100))........... Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Joseph O'Connell Sent: Friday, September 24, 2004 3:04 PM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Problem concatening a string and a numeric A field in a table contains integer values. How do I create a SQL statement that will prefix the values with a string? 
For example, if the field values are 1 2 3 I wish to return abc1 abc2 abc3 "SELECT 'abc' + lngCompID AS ModifiedValue FROM dbo.tblCompany" generates an error message: "Syntax error converting the varchar value 'abc' to a column of data type int" "SELECT 'abc' + Cstr(lngCompID) AS ModifiedValue FROM dbo.tblCompany" generates an error message: "'Cstr' is not a recognized function name" Joe O'Connell _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From joconnell at indy.rr.com Fri Sep 24 16:36:14 2004 From: joconnell at indy.rr.com (Joseph O'Connell) Date: Fri, 24 Sep 2004 16:36:14 -0500 Subject: [dba-SQLServer] Problem concatening a string and a numeric Message-ID: <004301c4a27e$a04d0e40$6701a8c0@joe> Chris, That is just what I needed--it works perfectly. Thank you Joe -----Original Message----- From: Mackin, Christopher To: dba-sqlserver at databaseadvisors.com Date: Friday, September 24, 2004 4:26 PM Subject: RE: [dba-SQLServer] Problem concatening a string and a numeric |Unlike Access, the conversion between Numeric and String values needs to be done explicitly in T-SQL. | |Use this: | |SELECT 'abc' + CAST(lngCompID AS VarChar(100))........... | |Chris Mackin | | |-----Original Message----- |From: dba-sqlserver-bounces at databaseadvisors.com |[mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Joseph |O'Connell |Sent: Friday, September 24, 2004 3:04 PM |To: dba-sqlserver at databaseadvisors.com |Subject: [dba-SQLServer] Problem concatening a string and a numeric | | |A field in a table contains integer values. How do I create a SQL statement |that will prefix the values with a string? 
|For example, if the field values are | 1 | 2 | 3 | |I wish to return | abc1 | abc2 | abc3 | |"SELECT 'abc' + lngCompID AS ModifiedValue FROM dbo.tblCompany" |generates an error message: |"Syntax error converting the varchar value 'abc' to a column of data type |int" | |"SELECT 'abc' + Cstr(lngCompID) AS ModifiedValue FROM dbo.tblCompany" |generates an error message: |"'Cstr' is not a recognized function name" | | |Joe O'Connell | | |_______________________________________________ |dba-SQLServer mailing list |dba-SQLServer at databaseadvisors.com |http://databaseadvisors.com/mailman/listinfo/dba-sqlserver |http://www.databaseadvisors.com | |_______________________________________________ |dba-SQLServer mailing list |dba-SQLServer at databaseadvisors.com |http://databaseadvisors.com/mailman/listinfo/dba-sqlserver |http://www.databaseadvisors.com | From listmaster at databaseadvisors.com Sat Sep 25 07:11:41 2004 From: listmaster at databaseadvisors.com (Bryan Carbonnell) Date: Sat, 25 Sep 2004 08:11:41 -0400 Subject: [dba-SQLServer] Administrivia - Server Back Up Message-ID: <4155283D.12253.1D8604@localhost> Well, the mailserver and list software is back up and running. Hopefully our problems are all cleared up now. If not, please let me know at either listmaster at databaseadvisors.com AND carbonnb at sympatico.ca That way if dba's server is not working, then I will still be able to look at it. Thanks for your understanding and patience. -- Bryan Carbonnell - listmaster at databaseadvisors.com There's a fine line between genius and insanity. I have erased this line. From jwcolby at colbyconsulting.com Sat Sep 25 08:19:22 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sat, 25 Sep 2004 09:19:22 -0400 Subject: [dba-SQLServer] Administrivia - Server Back Up In-Reply-To: <4155283D.12253.1D8604@localhost> Message-ID: <006b01c4a302$493c7ee0$e8dafea9@ColbyM6805> Wow, fast service. Thanks for all the work you do on this! John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Bryan Carbonnell Sent: Saturday, September 25, 2004 8:12 AM To: administrivia at databaseadvisors.com Subject: [dba-SQLServer] Administrivia - Server Back Up Well, the mailserver and list software is back up and running. Hopefully our problems are all cleared up now. If not, please let me know at either listmaster at databaseadvisors.com AND carbonnb at sympatico.ca That way if dba's server is not working, then I will still be able to look at it. Thanks for your understanding and patience. -- Bryan Carbonnell - listmaster at databaseadvisors.com There's a fine line between genius and insanity. I have erased this line. _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From john at winhaven.net Sat Sep 25 09:12:58 2004 From: john at winhaven.net (John Bartow) Date: Sat, 25 Sep 2004 09:12:58 -0500 Subject: [dba-SQLServer] Administrivia - Server Back Up In-Reply-To: <4155283D.12253.1D8604@localhost> Message-ID: Three cheers and a HUGE pat on the back for Bryan! He did this maintenance work in his personal free time! John Bartow, President Database Advisors, Inc. Email: mailto:president at databaseadvisors.com Website: http://www.databaseadvisors.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Bryan Carbonnell Sent: Saturday, September 25, 2004 7:12 AM To: administrivia at databaseadvisors.com Subject: [dba-SQLServer] Administrivia - Server Back Up Well, the mailserver and list software is back up and running. Hopefully our problems are all cleared up now. 
If not, please let me know at either listmaster at databaseadvisors.com AND carbonnb at sympatico.ca That way if dba's server is not working, then I will still be able to look at it. Thanks for your understanding and patience. -- Bryan Carbonnell - listmaster at databaseadvisors.com There's a fine line between genius and insanity. I have erased this line. _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 26 11:24:36 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 26 Sep 2004 12:24:36 -0400 Subject: [dba-SQLServer] Restore In-Reply-To: Message-ID: <000201c4a3e5$54dd3d00$e8dafea9@ColbyM6805> Will a SQL Server restore build the db containers or do they have to be present already? IOW if I lost everything, is a backup enough to get the db working on a different machine? John W. Colby www.ColbyConsulting.com From kens.programming at verizon.net Sun Sep 26 11:46:53 2004 From: kens.programming at verizon.net (Ken Stoker) Date: Sun, 26 Sep 2004 09:46:53 -0700 Subject: [dba-SQLServer] Restore In-Reply-To: <000201c4a3e5$54dd3d00$e8dafea9@ColbyM6805> Message-ID: <20040926164636.ZEEO8960.out008.verizon.net@enterprise> John, Yes, I create new databases all the time from backups. Select the restore task and then change the database name to your new database name. On the General tab, there is a restore set of option buttons with the options Database, Filegroups or Files, or From Device. Select From Device and use the dialogs to locate your backup file. Once you have selected a backup file, under the Options tab, you should see the paths for the mdf and ldf files. Change as needed. Then restore. Good luck Ken -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. 
Colby Sent: Sunday, September 26, 2004 9:25 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Restore Will a SQL Server restore build the db containers or do they have to be present already? IOW if I lost everything, is a backup enough to get the db working on a different machine? John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Sun Sep 26 11:49:00 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 26 Sep 2004 09:49:00 -0700 Subject: [dba-SQLServer] Restore In-Reply-To: <000201c4a3e5$54dd3d00$e8dafea9@ColbyM6805> References: <000201c4a3e5$54dd3d00$e8dafea9@ColbyM6805> Message-ID: YES, it IS enough... On Sun, 26 Sep 2004 12:24:36 -0400, John W. Colby wrote: > Will a SQL Server restore build the db containers or do they have to be > present already? IOW if I lost everything, is a backup enough to get the db > working on a different machine? > > John W. Colby > www.ColbyConsulting.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org From fhtapia at gmail.com Sun Sep 26 11:49:39 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 26 Sep 2004 09:49:39 -0700 Subject: [dba-SQLServer] Restore In-Reply-To: References: <000201c4a3e5$54dd3d00$e8dafea9@ColbyM6805> Message-ID: oh, the only "REAL" thing to worry about is the destination of the actual files... the paths must exist, or you must supply paths that DO exist On Sun, 26 Sep 2004 09:49:00 -0700, Francisco Tapia wrote: > YES, it IS enough... > > > > > On Sun, 26 Sep 2004 12:24:36 -0400, John W. 
Colby > wrote: > > Will a SQL Server restore build the db containers or do they have to be > > present already? IOW if I lost everything, is a backup enough to get the db > > working on a different machine? > > > > John W. Colby > > www.ColbyConsulting.com > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > > -- > -Francisco > http://ft316db.VOTEorNOT.org > -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Sun Sep 26 13:12:29 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 26 Sep 2004 14:12:29 -0400 Subject: [dba-SQLServer] Registering SQL Servers In-Reply-To: Message-ID: <000301c4a3f4$65c73bc0$e8dafea9@ColbyM6805> I have 3 computers with SQL Server installed, full install. I want to register the databases on each machine on each other machine so that I can see the three databases from EM regardless of which machine I am on. I will admit up front that I did the install under different user names, i.e. under the default Administrator on one and under jcolby on the other two (I think). I am using Windows authentication. I have a "reasonably strong" 10-character mnemonic password (first character of 10 words, but no numbers or special characters). I have created identical Administrator / password accounts on all three machines. All three machines can see shares from each other without entering any passwords / user names (as long as I'm logged in as administrator on that machine). I have logged off and back on as the administrator using the identical password on each machine. Neo1 can see Local and can see (and register) Soltek, but if I try and register Neo2 I can see the server but get "Neo2 - Login failed for 'Neo2\Guest'". 
Neo2 can see Local and can see (and register) Soltek, but if I try and register Neo1 I can see the server but get the same "login failed for Neo1\Guest" error. Soltek can see Local (although it is CALLED SOLTEK!!!). It can see Neo1 and Neo2 but cannot register them, getting the same "login failed" message. What is going on here? How do I sync them up so that all three can see and register the other two? Also what is the difference between the solid green circle with a white arrow (in the server group tree) and the white circle with the green arrow? Neo1 shows green with white arrows, Neo2 shows white with green arrows, and Soltek shows its own name instead of Local and a solid green with white arrow. I believe Soltek was showing a Local but I couldn't connect to it, so I deleted it and re-registered it to itself which is why I am not seeing Local. Can anyone briefly and succinctly explain what is happening during this registration process, and how to get where I want to go? TIA, John W. Colby www.ColbyConsulting.com From michael at ddisolutions.com.au Sun Sep 26 20:05:10 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Mon, 27 Sep 2004 11:05:10 +1000 Subject: [dba-SQLServer] Registering SQL Servers Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011BC2@ddi-01.DDI.local> John, I think you have several options... Give yourself Domain Admin privs. Alternatively create another single a/c and give it local admin on each server. Register using sa for each server. To remotely admin SQL you need to have Local Admin on each box and be logged on as that account. I don't think creating 3 a/c's basically the same counts! regards Michael M -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of John W. 
Colby Sent: Monday, 27 September 2004 4:12 AM To: dba-sqlserver at databaseadvisors.com Subject: [dba-SQLServer] Registering SQL Servers I have 3 computers with SQL Server installed, full install. I want to register the databases on each machine on each other machine so that I can see the three databases from EM regardless of which machine I am on. I will admit up front that I did the install under different user names, i.e. under the default Administrator on one and under jcolby on the other two (I think). I am using windows authentication. I have a "reasonably strong" 10 character pneumonic password (first character of 10 words, but no numbers or special characters). I have created identical Administrator / password accounts on all three machines. All three machines can see shares from each other without entering any passwords / user names (as long as I'm logged in as administrator on that machine). I have logged off and back on as the administrator using the identical password on each machine. Neo1 can see Local and can see (and register) Soltek, but if I try and register Neo2 I can see the server but get ""Neo2 - Login failed for 'Neo2\Guest'. Neo2 can see Local and can see (and register) Soltek, but if I try and register Neo1 I can see the server but get the same "login failed for Neo1\Guest" error. Soltek can see Local (although it is CALLED SOLTEK!!! It can see Neo1 and Neo2 but cannot register them, getting the same "login failed" message. What is going on here. How do I sync them up so that all three can see and register the other two. Also what is the difference between the solid green circle with a white arrow (in the server group tree) and the white circle with the green arrow? Neo1 shows green with white arrows, Neo2 shows white with green arrows, and Soltek shows its own name instead of Local and a solid green with white arrow. 
I believe Soltek was showing a Local but I couldn't connect to it, so I deleted it and re-registered it to itself, which is why I am not seeing Local. Can anyone briefly and succinctly explain what is happening during this registration process, and how to get where I want to go? TIA, John W. Colby www.ColbyConsulting.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Sun Sep 26 20:35:30 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 26 Sep 2004 21:35:30 -0400 Subject: [dba-SQLServer] Registering SQL Servers In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011BC2@ddi-01.DDI.local> Message-ID: <000601c4a432$4a2a1b10$e8dafea9@ColbyM6805> Michael, I suppose I should have specified. This is my home office. I only have a workgroup. I am trying to get SBS 2003 or even Windows 2003 set up, but the new computers I bought don't have drivers for 2003 yet so... Workgroup only, no domain. Building an identical SA for each machine and logging in as that user allows the file sharing to work without having to supply any passwords etc. It appears to Windows somehow that I am logged on to every machine as the same user, thus it doesn't ask me for a password, at least to get at mapped drives and shares. SQL is set up to use Windows authentication. If I can use the maps and shares, then am I not a trusted log in? If I am, then why is the process of registering not accepting the user as a valid user? It is supposed to be asking Windows if I am valid, and Windows is accepting me as valid at least for the purposes of shares and maps. This is stuff I never really needed to know, and I am NOT a network administrator. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Michael Maddison Sent: Sunday, September 26, 2004 9:05 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Registering SQL Servers _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From michael at ddisolutions.com.au Sun Sep 26 20:47:10 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Mon, 27 Sep 2004 11:47:10 +1000 Subject: [dba-SQLServer] Registering SQL Servers Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011BC3@ddi-01.DDI.local> <<This is stuff I never really needed to know, and I am NOT a network administrator.>> LOL... Me either... I'm not familiar with Workgroups for this type of stuff. 
However, what I'd do to get me going is change each install to allow mixed mode. Right click on the server in EM and select properties --> security --> SQL + Windows. You will need to do it locally, of course. Now you should be able to register using sa. The sa pwd doesn't have to be the same for each server. HTH Michael M 
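Michael's suggestion can be sketched in T-SQL. This is a minimal sketch assuming SQL Server 2000: the password shown is a placeholder, and changing the authentication mode typically requires restarting the SQL Server service before sa logins are accepted.

```sql
-- Run locally on each server AFTER enabling mixed mode in EM
-- (right-click the server --> Properties --> Security --> SQL Server and Windows),
-- then restart the SQL Server service.
-- 'StrongSaPwd' is a placeholder; pick your own.
EXEC sp_password @old = NULL, @new = 'StrongSaPwd', @loginame = 'sa'
```

You can then register each remote server in EM by choosing "Use SQL Server authentication" and supplying sa and that server's password.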
_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From michael at ddisolutions.com.au Sun Sep 26 20:49:20 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Mon, 27 Sep 2004 11:49:20 +1000 Subject: [dba-SQLServer] Restore Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011BC4@ddi-01.DDI.local> And watch out for the db name combo, it's not intuitive but you need to type in the db name, otherwise it will try and overwrite the 1st name in the list. (It will fail thank god ;-))) cheers Michael M oh, the only "REAL" thing to worry about is the destination of the actual files... the paths must exist, or you must supply paths that DO exist On Sun, 26 Sep 2004 09:49:00 -0700, Francisco Tapia wrote: > YES, it IS enough... > > > > > On Sun, 26 Sep 2004 12:24:36 -0400, John W. Colby > wrote: > > Will a SQL Server restore build the db containers or do they have to > > be present already? 
IOW if I lost everything, is a backup enough to > > get the db working on a different machine? > > > > John W. Colby > > www.ColbyConsulting.com > -- > -Francisco > http://ft316db.VOTEorNOT.org _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Sun Sep 26 22:08:20 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Sun, 26 Sep 2004 20:08:20 -0700 Subject: [dba-SQLServer] Registering SQL Servers In-Reply-To: <59A61174B1F5B54B97FD4ADDE71E7D01011BC3@ddi-01.DDI.local> References: <59A61174B1F5B54B97FD4ADDE71E7D01011BC3@ddi-01.DDI.local> Message-ID: Create a Windows UID/PWD identical to the remote machine to all 3 servers and make him Admin... Windows Authentication will see the uid and pwd as the same and allow you full admin rights to each box. You can now register w/ Windows authentication in EM. 
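Returning to the Restore thread above: a restore does create the data and log files itself, so a backup alone is enough, but as noted there the destination directories must already exist. A minimal sketch (backup path, logical file names, and target paths are all illustrative):

```sql
-- See what logical file names the backup contains.
RESTORE FILELISTONLY FROM DISK = 'C:\Backups\MyDb.bak'

-- Restore onto a machine with a different drive layout; the D:\Data
-- directory must already exist, but the .mdf/.ldf files will be created.
RESTORE DATABASE MyDb
FROM DISK = 'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_Log' TO 'D:\Data\MyDb_log.ldf'
```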
_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com -- -Francisco http://ft316db.VOTEorNOT.org From jwcolby at colbyconsulting.com Sun Sep 26 22:54:08 2004 From: jwcolby at colbyconsulting.com (John W. Colby) Date: Sun, 26 Sep 2004 23:54:08 -0400 Subject: [dba-SQLServer] Registering SQL Servers In-Reply-To: Message-ID: <000701c4a445$a7850d70$e8dafea9@ColbyM6805> I did that (see my original post AT THE BOTTOM ;-), no joy. John W. 
Colby www.ColbyConsulting.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Francisco Tapia Sent: Sunday, September 26, 2004 11:08 PM To: dba-sqlserver at databaseadvisors.com Subject: Re: [dba-SQLServer] Registering SQL Servers From HARVEYF1 at WESTAT.com Mon Sep 27 12:39:09 2004 From: HARVEYF1 at WESTAT.com (Francis Harvey) Date: Mon, 27 Sep 2004 13:39:09 -0400 Subject: [dba-SQLServer] Registering SQL Servers Message-ID: <446DDE75CFC7E1438061462F85557B0F0481EAA3@remail2.westat.com> John, I was looking for a posting with something similar to the fix that was recommended to me for our corporate network. 
In this post, the item on changing the local security policy, "Access this computer from the network", is where I had to add a coworker's domain name (WFW): http://groups.google.com/groups?q=%22SQL+Server%22+%22login+failed+for%22+local+security+policy&hl=en&lr=&ie=UTF-8&selm=1bOkWA4XCHA.1576%40cpmsftngxa07&rnum=8 There is also this one talking about some sort of XP specific setting which might also apply in your case (WFW): http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=afe2f3eb.0309111141.6c65b567%40posting.google.com Francis R Harvey III WB 303, (301)294-3952 harveyf1 at westat.com From JColby at dispec.com Mon Sep 27 14:04:17 2004 From: JColby at dispec.com (Colby, John) Date: Mon, 27 Sep 2004 15:04:17 -0400 Subject: [dba-SQLServer] Registering SQL Servers Message-ID: <05C61C52D7CAD211A7830008C7DF6F1079BDFB@DISABILITYINS01> Thanks, I'll read up on it. John W. Colby The DIS Database Guy -----Original Message----- From: Francis Harvey [mailto:HARVEYF1 at westat.com] Sent: Monday, September 27, 2004 1:39 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: RE: [dba-SQLServer] Registering SQL Servers _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From stuart at lexacorp.com.pg Mon Sep 27 17:42:44 2004 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Tue, 28 Sep 2004 08:42:44 +1000 Subject: [dba-SQLServer] Registering SQL Servers In-Reply-To: <446DDE75CFC7E1438061462F85557B0F0481EAA3@remail2.westat.com> Message-ID: <41592404.20322.18D2BACA@lexacorp.com.pg> On 27 Sep 2004 at 13:39, Francis Harvey wrote: > There is also this one talking about some sort of XP specific setting > which might also apply in your case (WFW): > http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=afe2f3eb.0309111141.6c65b567%40posting.google.com An absolute classic. What a bizarre change to make to the network authentication system which had worked fine for the last 10 years or so. Ask me again why I don't use XP :-) -- Stuart From JFK at puj.edu.co Wed Sep 29 13:29:18 2004 From: JFK at puj.edu.co (Julian Felipe Castrillon Trejos) Date: Wed, 29 Sep 2004 13:29:18 -0500 Subject: [dba-SQLServer] sql server triggers Message-ID: I have a question about SQL triggers. I have 2 different servers, and I want a trigger executing in a database on serverA to update a database on serverB. How can I do this? Is this possible? Thanks for your help. 
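One common answer to the trigger question above is a linked server on serverA pointing at serverB, with the trigger issuing DML through a four-part name. This is a sketch only: server, database, table, and login names are all illustrative, and cross-server DML from a trigger generally runs as a distributed transaction, so MSDTC must be running on both machines.

```sql
-- On serverA: define serverB as a linked server and map a login.
EXEC sp_addlinkedserver @server = 'SERVERB', @srvproduct = '',
    @provider = 'SQLOLEDB', @datasrc = 'SERVERB'
EXEC sp_addlinkedsrvlogin @rmtsrvname = 'SERVERB', @useself = 'false',
    @rmtuser = 'sa', @rmtpassword = 'StrongPwd'
GO

-- Trigger in the serverA database that pushes updates to serverB
-- via the four-part name server.database.owner.table.
CREATE TRIGGER trg_SyncCustomer ON dbo.Customer
AFTER UPDATE
AS
    UPDATE b
    SET b.CustName = i.CustName
    FROM SERVERB.OtherDb.dbo.Customer AS b
    INNER JOIN inserted AS i ON i.CustID = b.CustID
GO
```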
--------------------------------------------- Julian Castrillon T Systems and computing engineer Analyst, IDA Ltda Ingenieria y desarrollo aplicados --------------------------------------------- Jesus saves, but only Buddha makes incremental backups From CMackin at quiznos.com Wed Sep 29 15:35:30 2004 From: CMackin at quiznos.com (Mackin, Christopher) Date: Wed, 29 Sep 2004 14:35:30 -0600 Subject: [dba-SQLServer] JOIN Issue with Databases Message-ID: Hi, I've got 2 business applications that use SQL Server 2000 databases, one is: COLLATE SQL_Latin1_General_CP850_BIN the other is: COLLATE SQL_Latin1_General_CP1_CI_AS Trying to join data from tables in each database on a PO field (Varchar(10)) results in the error: Cannot resolve collation conflict for equal to operation. Does anyone know a workaround for this? The databases are required to be collated as is, and I've tried modifying the join fields via CAST: INNER JOIN ... ON CAST({field} AS VarChar(10)) = CAST({field} AS VarChar(10)) But this results in the same error. Thanks, Chris Mackin From michael at ddisolutions.com.au Wed Sep 29 19:34:25 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Thu, 30 Sep 2004 10:34:25 +1000 Subject: [dba-SQLServer] JOIN Issue with Databases Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011BE6@ddi-01.DDI.local> Something like this... from BOL The predicate in the following query is evaluated in collation greek_ci_as because the right expression has the explicit label, which takes precedence over the implicit label of the left expression: SELECT * FROM TestTab WHERE GreekCol = LatinCol COLLATE greek_ci_as Looks like a real PITA We had issues with this once when for some unknown reason the default collations on 2 of our servers were out of whack. We were lucky that we could blow 1 away and rebuild with correct collation. 
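The BOL excerpt above carries over directly to the join case: force one side of the predicate to an explicit collation so there is no conflict to resolve. A sketch, with the database and table names made up:

```sql
-- Hypothetical cross-database join; PO is varchar(10) in both tables,
-- one database collated CP850_BIN, the other CP1_CI_AS.
SELECT a.PO
FROM BinDb.dbo.PurchaseOrders AS a
INNER JOIN CiDb.dbo.Invoices AS b
    ON a.PO = b.PO COLLATE SQL_Latin1_General_CP1_CI_AS

-- To avoid hard-coding a collation name, both sides can instead be
-- coerced to the current database's default:
--   ON a.PO COLLATE DATABASE_DEFAULT = b.PO COLLATE DATABASE_DEFAULT
```

Note that coercing the binary-collated side to a case-insensitive collation also changes the matching semantics: values differing only in case will now join.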
cheers Michael M Hi, I've got 2 business applications that use SQL Server 2000 databases, one is: COLLATE SQL_Latin1_General_CP850_BIN the other is: COLLATE SQL_Latin1_General_CP1_CI_AS Trying to join data from tables in each database on a PO field (Varchar(10)) results in the error: Cannot resolve collation conflict for equal to operation. Does anyone know a workaround for this? The databases are required to be collated as is, and I've tried modifying the join fields via CAST: INNER JOIN ... ON CAST({field} AS VarChar(10)) = CAST({field} AS VarChar(10)) But this results in the same error. Thanks, Chris Mackin _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From CMackin at quiznos.com Thu Sep 30 09:00:46 2004 From: CMackin at quiznos.com (Mackin, Christopher) Date: Thu, 30 Sep 2004 08:00:46 -0600 Subject: [dba-SQLServer] JOIN Issue with Databases Message-ID: Michael, Thanks for the response, I did see this just after I sent my note and oddly enough, the example Microsoft gives works as described, but when I tried to use COLLATE SQL_Latin1_General_CP1_CI_AS on the appropriate field in either an INNER JOIN or the WHERE clause it returned an error stating that Collate was invalid. The workaround I used was to create a temp table collated appropriately, then dump everything from the source table into the newly collated temp and then join on that. As it's a process that will run maybe once a day with probably less than 100 records each day, it should be fine. -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Michael Maddison Sent: Wednesday, September 29, 2004 6:34 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] JOIN Issue with Databases Something like this... 
from BOL The predicate in the following query is evaluated in collation greek_ci_as because the right expression has the explicit label, which takes precedence over the implicit label of the left expression: SELECT * FROM TestTab WHERE GreekCol = LatinCol COLLATE greek_ci_as Looks like a real PITA We had issues with this once when for some unknown reason the default collations on 2 of our servers were out of whack. We were lucky that we could blow 1 away and rebuild with correct collation. cheers Michael M Hi, I've got 2 business applications that use SQL Server 2000 databases, one is: COLLATE SQL_Latin1_General_CP850_BIN the other is: COLLATE SQL_Latin1_General_CP1_CI_AS Trying to join data from tables in each database on a PO field (Varchar(10)) results in the error: Cannot resolve collation conflict for equal to operation. Does anyone know a workaround for this? The databases are required to be collated as is, and I've tried modifying the join fields via CAST: INNER JOIN ... ON CAST({field} AS VarChar(10)) = CAST({field} AS VarChar(10)) But this results in the same error. Thanks, Chris Mackin _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From Susan.Klos at fldoe.org Thu Sep 30 14:34:16 2004 From: Susan.Klos at fldoe.org (Klos, Susan) Date: Thu, 30 Sep 2004 15:34:16 -0400 Subject: [dba-SQLServer] Union query Message-ID: <01B619CB8F6C8C478EDAC39191AEC51EE738AD@DOESEFPEML02.EUS.FLDOE.INT> Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. 
I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 From mikedorism at adelphia.net Thu Sep 30 14:48:13 2004 From: mikedorism at adelphia.net (Mike & Doris Manning) Date: Thu, 30 Sep 2004 15:48:13 -0400 Subject: [dba-SQLServer] Union query In-Reply-To: <01B619CB8F6C8C478EDAC39191AEC51EE738AD@DOESEFPEML02.EUS.FLDOE.INT> Message-ID: <000001c4a726$6c4b5a50$060aa845@hargrove.internal> SELECT * FROM dbo.Unionquery Doris Manning Database Administrator Hargrove Inc. www.hargroveinc.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 3:34 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: [dba-SQLServer] Union query Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From CMackin at quiznos.com Thu Sep 30 14:48:58 2004 From: CMackin at quiznos.com (Mackin, Christopher) Date: Thu, 30 Sep 2004 13:48:58 -0600 Subject: [dba-SQLServer] Union query Message-ID: Certainly, Just use: Create View dbo.MyView AS Select * From dbo.schoolEnrollmentNumberselemonly Union all Select * From dbo.SchoolEnrollmentNumberselemcombo Then you can call dbo.MyView from wherever -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 1:34 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: [dba-SQLServer] Union query Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From Susan.Klos at fldoe.org Thu Sep 30 14:53:57 2004 From: Susan.Klos at fldoe.org (Klos, Susan) Date: Thu, 30 Sep 2004 15:53:57 -0400 Subject: [dba-SQLServer] Union query Message-ID: <01B619CB8F6C8C478EDAC39191AEC51EE738AF@DOESEFPEML02.EUS.FLDOE.INT> That makes sense and I was going to do that, except that when I create the query in SQL Query Analyzer it saves on my hard drive, not on the SQL Server. How do I get it to save as dbo.Unionquery and not as Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 -----Original Message----- From: Mike & Doris Manning [mailto:mikedorism at adelphia.net] Sent: Thursday, September 30, 2004 3:48 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Union query SELECT * FROM dbo.Unionquery Doris Manning Database Administrator Hargrove Inc. www.hargroveinc.com -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 3:34 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: [dba-SQLServer] Union query Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
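On the save-location confusion above: executing the CREATE VIEW batch in Query Analyzer creates the view object inside the current database on the server; the .sql file on disk is only the script text. One way to confirm the object actually landed on the server (SQL Server 2000 system table; the database name below is hypothetical, the view name follows the thread):

```sql
USE MyDatabase  -- hypothetical: whichever database the view was created in
GO
SELECT name, type, crdate
FROM sysobjects
WHERE name = 'Unionquery' AND type = 'V'
GO
```

If that returns a row, the view exists server-side regardless of whether the script was ever saved to disk.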
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From Susan.Klos at fldoe.org Thu Sep 30 14:55:09 2004 From: Susan.Klos at fldoe.org (Klos, Susan) Date: Thu, 30 Sep 2004 15:55:09 -0400 Subject: [dba-SQLServer] Union query Message-ID: <01B619CB8F6C8C478EDAC39191AEC51EE738B0@DOESEFPEML02.EUS.FLDOE.INT> OK. Where do I write this code? Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 -----Original Message----- From: Mackin, Christopher [mailto:CMackin at quiznos.com] Sent: Thursday, September 30, 2004 3:49 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Union query Certainly, Just use: Create View dbo.MyView AS Select * From dbo.schoolEnrollmentNumberselemonly Union all Select * From dbo.SchoolEnrollmentNumberselemcombo Then you can call dbo.MyView from wherever -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 1:34 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: [dba-SQLServer] Union query Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fhtapia at gmail.com Thu Sep 30 15:03:49 2004 From: fhtapia at gmail.com (Francisco Tapia) Date: Thu, 30 Sep 2004 13:03:49 -0700 Subject: [dba-SQLServer] Union query In-Reply-To: <01B619CB8F6C8C478EDAC39191AEC51EE738B0@DOESEFPEML02.EUS.FLDOE.INT> References: <01B619CB8F6C8C478EDAC39191AEC51EE738B0@DOESEFPEML02.EUS.FLDOE.INT> Message-ID: Write this code in Query Analyzer On Thu, 30 Sep 2004 15:55:09 -0400, Klos, Susan wrote: > OK. Where do I write this code? > > Susan Klos > Senior Database Analyst > Evaluation and Reporting > Florida Department of Education > 850-245-0708 > sc 205-0708 > > > > > -----Original Message----- > From: Mackin, Christopher [mailto:CMackin at quiznos.com] > Sent: Thursday, September 30, 2004 3:49 PM > To: dba-sqlserver at databaseadvisors.com > Subject: RE: [dba-SQLServer] Union query > > Certainly, > > Just use: > > Create View dbo.MyView > AS > Select * From dbo.schoolEnrollmentNumberselemonly > Union all > Select * From dbo.SchoolEnrollmentNumberselemcombo > > Then you can call dbo.MyView from wherever > > -Chris Mackin > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Klos, > Susan > Sent: Thursday, September 30, 2004 1:34 PM > To: 'dba-sqlserver at databaseadvisors.com' > Subject: [dba-SQLServer] Union query > > Please be kind. I am doing this from an Access viewpoint and am having > trouble making a transition. 
> > I have created a union query: > > Select * > > From dbo.schoolEnrollmentNumberselemonly (this is a view) > > Union all > > Select * > > From dbo.SchoolEnrollmentNumberselemcombo (this is a view) > > Now I need to be able to call this query in another view. Is this possible? > How? > > Susan Klos > > Senior Database Analyst > > Evaluation and Reporting > > Florida Department of Education > > 850-245-0708 > > sc 205-0708 > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > -- -Francisco http://ft316db.VOTEorNOT.org Pc This! without the jargon From CMackin at quiznos.com Thu Sep 30 15:05:15 2004 From: CMackin at quiznos.com (Mackin, Christopher) Date: Thu, 30 Sep 2004 14:05:15 -0600 Subject: [dba-SQLServer] Union query Message-ID: Query Analyzer if possible; alternatively, you can set it as the CommandText of an ADO command object in VB(A) and have it execute that way, or you can use the View or Query window (depending on version of Access) in an ADP. You will also want to add something like: GRANT SELECT ON dbo.MyView TO Public so the security is set (the above makes it readable to EVERYONE, so make sure to adjust accordingly). 
This statement can go beneath the Creation code as follows: Create View dbo.MyView AS Select * From dbo.schoolEnrollmentNumberselemonly Union all Select * From dbo.SchoolEnrollmentNumberselemcombo GO GRANT SELECT ON dbo.MyView TO Public GO or it can be run independently after the object is created. -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 1:55 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: RE: [dba-SQLServer] Union query OK. Where do I write this code? Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 -----Original Message----- From: Mackin, Christopher [mailto:CMackin at quiznos.com] Sent: Thursday, September 30, 2004 3:49 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Union query Certainly, Just use: Create View dbo.MyView AS Select * From dbo.schoolEnrollmentNumberselemonly Union all Select * From dbo.SchoolEnrollmentNumberselemcombo Then you can call dbo.MyView from wherever -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 1:34 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: [dba-SQLServer] Union query Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From artful at rogers.com Thu Sep 30 16:23:43 2004 From: artful at rogers.com (Arthur Fuller) Date: Thu, 30 Sep 2004 17:23:43 -0400 (EDT) Subject: [dba-SQLServer] Union query In-Reply-To: <01B619CB8F6C8C478EDAC39191AEC51EE738AD@DOESEFPEML02.EUS.FLDOE.INT> Message-ID: <20040930212343.73256.qmail@web88210.mail.re2.yahoo.com> Assuming that the new query is also a view, just call it as you would any other view or table-select. i.e. v1: select * from t1 v2: select * from t2 inner join t3 on pk = pk v3: select top 25 col1, col2, col3 from v2 order by col1, col2 DESC No problem. Every view is a virtual table. Treat it like a table. HTH, Arthur "Klos, Susan" wrote: Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
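Arthur's v1/v2/v3 outline above, written out as actual DDL so the view-on-view layering is explicit (table and column names are hypothetical):

```sql
CREATE VIEW dbo.v1 AS
    SELECT * FROM t1
GO

CREATE VIEW dbo.v2 AS
    SELECT t2.pk, t2.col1, t3.col2, t3.col3
    FROM t2 INNER JOIN t3 ON t2.pk = t3.pk
GO

-- A view can select from another view exactly as it would from a table.
-- In SQL Server 2000 an ORDER BY inside a view is only allowed together
-- with TOP, which Arthur's v3 already uses.
CREATE VIEW dbo.v3 AS
    SELECT TOP 25 col1, col2, col3
    FROM dbo.v2
    ORDER BY col1, col2 DESC
GO
```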
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From michael at ddisolutions.com.au Thu Sep 30 19:11:22 2004 From: michael at ddisolutions.com.au (Michael Maddison) Date: Fri, 1 Oct 2004 10:11:22 +1000 Subject: [dba-SQLServer] Union query Message-ID: <59A61174B1F5B54B97FD4ADDE71E7D01011BF3@ddi-01.DDI.local> You RUN/execute the statement in QA... don't SAVE it ;-))) cheers Michael M OK. Where do I write this code? Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 -----Original Message----- From: Mackin, Christopher [mailto:CMackin at quiznos.com] Sent: Thursday, September 30, 2004 3:49 PM To: dba-sqlserver at databaseadvisors.com Subject: RE: [dba-SQLServer] Union query Certainly, Just use: Create View dbo.MyView AS Select * From dbo.schoolEnrollmentNumberselemonly Union all Select * From dbo.SchoolEnrollmentNumberselemcombo Then you can call dbo.MyView from wherever -Chris Mackin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com]On Behalf Of Klos, Susan Sent: Thursday, September 30, 2004 1:34 PM To: 'dba-sqlserver at databaseadvisors.com' Subject: [dba-SQLServer] Union query Please be kind. I am doing this from an Access viewpoint and am having trouble making a transition. I have created a union query: Select * From dbo.schoolEnrollmentNumberselemonly (this is a view) Union all Select * From dbo.SchoolEnrollmentNumberselemcombo (this is a view) Now I need to be able to call this query in another view. Is this possible? How? 
Susan Klos Senior Database Analyst Evaluation and Reporting Florida Department of Education 850-245-0708 sc 205-0708 _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com