jwcolby
jwcolby at colbyconsulting.com
Mon Jan 9 09:44:42 CST 2012
Francisco,

> the only issue would be that if the backup process would choke the server.
> it depends on the hardware.

In the middle of a campaign I would probably not do a backup. Any other time there is very little traffic.

> I'm assuming you're asking the list and that you are referring to the
> transaction log. and the answer is yes, it does empty, it empties all
> completed checkpoints. however the size of the file won't change.

Yes, I am asking the list, and yes, about the transaction log. This database is used to store what happens on a "medium volume" web site. It is an advertising site where people come to get coupons for products, Aleve pain reliever in the latest campaign. They apparently get as many as 10,000 "visits" a day at the height of a campaign, with a few events (clicks to various pages) per visit. That works out to roughly seven visits per minute, and well under one "event" per second.

> If the customer does not need up to the minute recoverability the recovery
> model should be set to simple.

It doesn't look like "extreme recoverability" is required.

>> 5) Would creating a clustered index on that "PK" help make things faster?

> is the pk used for searches?

No, not used at all as far as I can tell. I guess I am thinking about heap versus ...

The specific table I am concerned about is the event table. It records web page events (clicks and such). There is a "session id" that groups events, but otherwise there is nothing specific or unique about a session that would make a good candidate key. There is already an autoincrement int; it was just never designated the PK. I am proposing turning that into the PK. My question really is whether doing so slows down the insertion of event records in any way. Placing an index on the session ID allows me to gather info about sessions, but does having them in a clustered index make that process any faster? Is it ever preferable to store in a heap vs. in a clustered index?
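For what it's worth, here is a minimal T-SQL sketch of what I am proposing. Table and column names (dbo.WebEvent, EventID, SessionID) are hypothetical stand-ins; the real schema will differ.

```sql
-- Promote the existing autoincrement (identity) int to a clustered PK.
-- Because an identity value is ever-increasing, new rows are always
-- appended at the logical end of the clustered index, so the per-insert
-- overhead versus a heap should be small (no mid-index page splits).
ALTER TABLE dbo.WebEvent
    ADD CONSTRAINT PK_WebEvent PRIMARY KEY CLUSTERED (EventID);

-- Nonclustered index to support the GROUP BY "session id" reporting.
-- This adds a modest per-insert cost (one extra index row) but keeps the
-- session queries from scanning the whole table.
CREATE NONCLUSTERED INDEX IX_WebEvent_SessionID
    ON dbo.WebEvent (SessionID);
```

At this traffic level (well under one insert per second) the extra index maintenance should be lost in the noise, per Francisco's comment below.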
> depends on the hardware, if there are no updates and simply inserts, then
> adding the index may be negligible.

Events are what they are: data about events that occur on the web site. They are never updated in any way, AFAICT.

John W. Colby
Colby Consulting

Reality is what refuses to go away
when you do not believe in it

On 1/9/2012 9:16 AM, Francisco Tapia wrote:
> On Thu, Jan 5, 2012 at 14:43, jwcolby<jwcolby at colbyconsulting.com> wrote:
>
>> 1) This appears to be a standard SQL Server setup, i.e. max memory is
>> insane, all cores assigned to SQL Server, compression is not turned on and
>> so forth. Is there a reason not to compress the backup?
>
> the only issue would be that if the backup process would choke the server.
> it depends on the hardware.
>
>> 4) Once backed up what happens to that log file? Does it automatically
>> "empty"?
>
> I'm assuming you're asking the list and that you are referring to the
> transaction log. and the answer is yes, it does empty, it empties all
> completed checkpoints. however the size of the file won't change. If the
> customer does not need up to the minute recoverability the recovery model
> should be set to simple.
>
>> 5) Would creating a clustered index on that "PK" help make things faster?
>
> is the pk used for searches?
>
>> 6) Many of the statistics group by the session ID. Obviously indexing
>> that and a few other key fields would speed up reporting but slow down
>> creating these log records but would it slow it down enough to make any
>> difference? It seems unlikely to me but I don't have much experience in a
>> situation like this. What I do know is that once created these event
>> records are apparently never modified.
>
> depends on the hardware, if there are no updates and simply inserts, then
> adding the index may be negligible.
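For the backup and recovery-model points above, a sketch of the relevant T-SQL. The database name and backup path are hypothetical; check edition support for backup compression (Enterprise in SQL Server 2008, Standard and up from 2008 R2).

```sql
-- If point-in-time restore is not needed, SIMPLE recovery lets the log
-- truncate on checkpoint instead of growing until the next log backup.
ALTER DATABASE CouponSite SET RECOVERY SIMPLE;

-- Compressed full backup: trades some CPU during the backup for a much
-- smaller file and less backup I/O. INIT overwrites the existing file.
BACKUP DATABASE CouponSite
    TO DISK = N'D:\Backups\CouponSite.bak'
    WITH COMPRESSION, INIT;
```

Note that switching to SIMPLE does not shrink the existing log file, as Francisco says; it only stops the log from needing log backups to truncate.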
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com