John W. Colby
jwcolby at colbyconsulting.com
Wed Sep 1 22:52:17 CDT 2004
Jim, I think the merge idea is simply to maximize the sheer number of names in the database. From what they are saying, they get info on various people over time and need to update those people. Perhaps a poll on car buying, or smoking, or whatever. If those fields don't already exist in the "big table" they add new ones to hold the info. Thus from that perspective "sports" is just another interest, no different from hunting or fishing or gun ownership.

As for the 64 bit processor helping with 32 bit apps, nope, not at all - however it certainly doesn't slow 32 bit apps down. However I understand that 64 bit versions of Windows 2003 and SQL Server are available. I purchased the Action Pack which AFAICT contains these things. I am hoping that the 64 bit versions may give some tangible benefit. If not, at least I have a current generation motherboard / processor with current generation SATA, gbit LAN etc.

I purchased the MSI K8 Neo Platinum motherboard which appears to be a pretty solid system. It provides all the RAID 0/1/10 your heart could desire, can directly manipulate 4 SATA drives and 4 IDE drives, and can combine these drives in any order into RAID arrays. These machines have gbit LAN built in, coming right off the NVIDIA core (not across the PCI bus). I am putting in a 5 port gbit switch so they can talk to each other at top speed.

I have acknowledged TOTAL ignorance about the ins and outs of SQL Server, but I am learning as fast as I can. My reading indicates that SQL Server can distribute a database across entirely different machines. Again, how this is done I haven't a clue, but in order to find out (and test it) I need two machines. I will shortly have two of these A64 machines to experiment with. If this can indeed be done and the results are good, I can take it from there. I will keep the group informed of my findings. If I can get this all to work I stand to have a good solid client and a lot of work.
If I can't, I stand to have a totally overpowered (but NICE) dev machine and a lot more SQL Server knowledge than I currently possess. In fact my Insurance Call Center client needs to go to SQL Server as well, so this will hopefully help me there too.

John W. Colby
www.ColbyConsulting.com

-----Original Message-----
From: accessd-bounces at databaseadvisors.com [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of JMoss
Sent: Wednesday, September 01, 2004 10:31 PM
To: Access Developers discussion and problem solving
Subject: RE: [AccessD] Every 100th record

John,

I can't help but think that you could lose as much as 10 - 15 % of the volume of this database by doing some type of dedup and / or householding process.

I wonder about the logic of mixing lists of different types in one database. It seems to me that the customer wouldn't want to mail an offer aimed at someone from a sports list to someone who got added to a list because they had purchased knitting supplies or some similar type of item. The database marketing firm that I worked with kept separate databases. Merging different types of customers into one mail blast seems like a shotgun approach, or someone just selling a lot of addresses in a very non-methodical manner.

Do you get enough of a performance boost from a 64 bit processor running a 32 bit OS or application to make it worthwhile? I would think that multiple 32 bit CPUs would be the ticket for 32 bit apps.

Jim

-----Original Message-----
From: accessd-bounces at databaseadvisors.com [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of John W. Colby
Sent: Tuesday, August 31, 2004 7:25 PM
To: 'Access Developers discussion and problem solving'
Subject: RE: [AccessD] Every 100th record

In fact the client has another database that has 125 million addresses, another that has 14 million more people, and another that handles all their sports mailings. They would like to merge them all.
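[The dedup / householding process Jim mentions can be sketched roughly as follows. This is a minimal illustration only, assuming hypothetical field names (first, last, street, zip) -- the real table's layout isn't shown anywhere in this thread.]

```python
def household_key(rec):
    """Collapse a record to a normalized street + zip 'household' key."""
    return (rec["street"].strip().lower(), rec["zip"].strip())

def person_key(rec):
    """Normalized person key: last name + first initial + household key."""
    return (rec["last"].strip().lower(),
            rec["first"].strip().lower()[:1]) + household_key(rec)

def dedup(records):
    """Keep only the first record seen for each normalized person key."""
    seen = set()
    out = []
    for rec in records:
        k = person_key(rec)
        if k not in seen:
            seen.add(k)
            out.append(rec)
    return out

people = [
    {"first": "John", "last": "Colby", "street": "1 Main St", "zip": "28722"},
    {"first": "JOHN", "last": "colby", "street": "1 Main St ", "zip": "28722"},  # dup
    {"first": "Mary", "last": "Colby", "street": "1 Main St", "zip": "28722"},   # same household
]
unique = dedup(people)
households = {household_key(r) for r in people}
print(len(unique), len(households))  # 2 unique people, 1 household
```

[Householding then means mailing once per household key rather than once per person; the normalization rules (case, whitespace, street abbreviations) are where the real work is.]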
I just bought a 3 GHz socket 754 Athlon 64 which I am loading with Win2K and SQL Server tonight. I can only pray that this gives me SOMETHING in the way of a speedup over my old AMD Athlon 2500. I have to examine my options, down to splitting up the database and having different machines process pieces. I also have to learn to tune SQL Server. Since I am starting from "know absolutely nothing" it shouldn't be too hard to get better results over time. ;-)

John W. Colby
www.ColbyConsulting.com

-----Original Message-----
From: accessd-bounces at databaseadvisors.com [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller
Sent: Tuesday, August 31, 2004 7:33 PM
To: 'Access Developers discussion and problem solving'
Subject: RE: [AccessD] Every 100th record

Just to put things in perspective, JC, the first client of the people who developed MySQL had 60M rows in their principal table. There are lots of apps way bigger than that. I once had a client that was adding 10M rows per month to the table of concern (this was an app recording seismic activity from several hundred meters). I must caution you that you should not use the term VLDB as loosely as you have been using it. You don't know the meaning of VLDB -- not yet at least. You're beginning to appreciate the turf, however. Once I bid on a project that had 100M rows, each containing a graphic file. Not to say that size is everything, but IMO VLDB comprises at least a TB, and often many hundreds of TBs.

I just got a contract with a company using MySQL whose test database's most important table comprises 100M rows. They expect their clients to have 10x as many rows. My job is to optimize the queries. Fortunately, I can assume any hardware I deem necessary to do it. They are after sub-second retrieves against 1B rows, with maybe 1000 users. Life's a beach and then you drown.
I don't know if I can deliver what they want, but what I can deliver is benchmarks against the various DBs that I'm comfortable with -- SQL 2000, Oracle, MySQL and DB/2. I figure that if none of them can do it, I'm off the hook :) The difficult part of this new assignment is that there's no way I can duplicate the hardware resources required to emulate the required system, so I have to assume that the benchmarks on my local system will hold up in a load-leveling 100-server environment -- at least until I have something worthy of installing, and can then test it in that environment.

I sympathize and empathize with your situation, JC. It's amazing how many of our tried-and-true solutions go right out the window when you escalate the number of rows to 100M -- and then factor in multiple joins. Stuff that looks spectacular with only 1M rows suddenly sucks big-time when applied to 100M rows.

Arthur

-----Original Message-----
From: accessd-bounces at databaseadvisors.com [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of John W. Colby
Sent: Tuesday, August 31, 2004 7:13 AM
To: 'Access Developers discussion and problem solving'
Subject: RE: [AccessD] Every 100th record

Paul,

In fact I am trying to make this run on my home system, which is part of the problem. This week I am playing "stay-at-home dad" as my wife starts the school year this week and has all those 1st week teacher meetings / training. I have never come even close to a db this size and it has definitely been a learning experience. Here's hoping I survive.

John W. Colby
www.ColbyConsulting.com

-----Original Message-----
From: accessd-bounces at databaseadvisors.com [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Paul Rodgers
Sent: Tuesday, August 31, 2004 3:49 AM
To: 'Access Developers discussion and problem solving'
Subject: RE: [AccessD] Every 100th record

65 million! What an amazing world you work in. Is there ever time in the week to pop home for an hour?
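[Arthur's benchmarking approach -- time the same query at growing row counts and watch how the cost scales -- can be sketched in miniature. This uses SQLite from the Python standard library purely as a stand-in (none of the SQL 2000 / Oracle / MySQL / DB/2 engines discussed above is assumed here), and the timed query is the thread's subject: pulling every 100th record, done with a modulo on the integer key.]

```python
import sqlite3
import time

def build(n):
    """Create an in-memory table with n rows, ids 1..n."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE names (id INTEGER PRIMARY KEY, name TEXT)")
    con.executemany("INSERT INTO names (name) VALUES (?)",
                    (("person%d" % i,) for i in range(n)))
    return con

def every_100th(con):
    """Systematic 1-in-100 sample: rows whose id is a multiple of 100."""
    return con.execute("SELECT id, name FROM names WHERE id % 100 = 0").fetchall()

for n in (10_000, 100_000):
    con = build(n)
    t0 = time.perf_counter()
    sample = every_100th(con)
    dt = time.perf_counter() - t0
    print(f"{n} rows -> {len(sample)} sampled in {dt * 1000:.1f} ms")
    con.close()
```

[The modulo predicate forces a full scan, which is exactly the kind of query that "looks spectacular with 1M rows and sucks big-time at 100M" -- scan cost grows linearly with table size while the sample stays 1%.]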
Cheers paul
--
_______________________________________________
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com