Bill Benson
bensonforums at gmail.com
Thu Mar 13 13:32:41 CDT 2014
Sounding stupid and being wrong are two different matters. So here is an
excerpt regarding static wear leveling (which is the subject we are talking
about):

"This means that a static block that contains valid data but that is never
being written to will still be moved around within the flash in order to
allow all blocks to receive the same amount of wear. It's easy to understand
why this would help, since those blocks would otherwise be unavailable to
the wear leveling algorithm. On the other hand, if you're trying to minimize
write/erase cycles, it's unintuitive that this approach is the better of the
two."

I buy the unintuitive part all right! Here is the rest of the article, which
probably has some of the same stuff John got his info from, and which I am
still trying to learn from but doubt I can:
http://thessdguy.com/how-controllers-maximize-ssd-life-better-wear-leveling/

And NOW, I will go back to Access. AFTER my lunch period.

On Mar 13, 2014 2:04 PM, "Bill Benson" <bensonforums at gmail.com> wrote:

> To me this sounds STUPID. Not stupidly described, but stupidly contrived.
> Therefore, since people are making and spending scores of Billions on it,
> CLEARLY I DON'T UNDERSTAND IT.
>
> It sounds like this: I know disk areas are getting worn down, so I will
> just wear down ALL my areas. LOL+LOL to the Google power.
>
> I mean, how stupid is that?? So what, I can write some MORE data later to
> the thinly used area when I am ready to REALLY write something? So that I
> can write SOME MORE to an area I already decided had been written to too
> often? I MEAN SERIOUSLY (!)
>
> Isn't it easier just to leave well enough alone? Just spread out your
> actual writes over the lesser-used places?
>
> I am bowing out of this discussion and going back to Access!
>
> On Mar 13, 2014 1:41 PM, "John W Colby" <jwcolby at gmail.com> wrote:
>
>> The controller is doing a completely behind-the-scenes movement of
>> blocks of data.
>> From what I have read, it does so when there is nothing else going on.
>> Basically, the controller looks for high-wear-level blocks which still
>> function but need to stop being written to; IOW, blocks that have been
>> written to many times. It then grabs a block of data from a
>> low-wear-level block that is apparently static (not written very often,
>> or not for a long time) and moves that data to the high-wear-level
>> block. The idea is that if a block isn't updated often, then it can go
>> in an area that has already been written a lot (but still functions),
>> and so drop the future writes to that high-use block.
>>
>> The SSD apparently keeps dynamic counts of the number of times a block
>> of NAND has been written. So you write a spreadsheet to the SSD, for
>> example. It sits there for a year, looked at but never written /
>> updated. The NAND storage location that it sits in has a use count of
>> 1. You have another location which holds a small MDB. You are writing
>> to that daily. Literally (from the PC's perspective) the same area of
>> the "disk" is read / updated / updated / updated. So that NAND storage
>> location may get an update count of 1000 in a month. Keep in mind that
>> NAND blocks are somewhat small, 64K perhaps.
>>
>> At any rate, the SSD controller is watching the number of times every
>> NAND block is written to. It then sees that one block has a count of 1
>> and another has a count of 1000. So behind the scenes, it swaps those
>> two blocks. The spreadsheet now sits in a block written 1000 times and
>> the database sits in a block that has been written once. Since the
>> spreadsheet is never written, it doesn't matter that it sits in a
>> high-wear location. The MDB, however, gets an unused block of NAND.
>> This is called "wear leveling". All SSDs do this.
>>
>> All behind the scenes, and all done in controller logic inside of the
>> SSD. The PC / OS never knows it is occurring.
>>
>> John W. Colby
>>
>> Reality is what refuses to go away
>> when you do not believe in it
>>
>> On 3/13/2014 1:05 PM, Bill Benson wrote:
>>
>>> See what you've started; now gonna get flamed cuz this is so
>>> non-programming and for sure non-DB.
>>>
>>> Colby" <jwcolby at gmail.com> wrote:
>>>
>>>> the controller itself will move that static data around to allow
>>>> other dynamic data to "use" the areas not yet written...
>>>
>>> So is the controller performing a
>>>
>>> read (old position info, old data) / re-write (old data, new position
>>> info) / write (new data and new position info)?
>>>
>>> Versus
>>>
>>> a read (position info) / write (data and position info)?
>>>
>>> Isn't that an efficiency give-back?
>>>
>>> Do spinning drives do this too?
>>>
>>> I wonder how fast SSDs would be if they did not do this, or were
>>> hardened to handle more writes.
>>>
>>> I also don't tend to do a lot of writes to my HD, unlike a storage
>>> drive. Now I am worried about the longevity of my NAS.
>>>
>>> I realize it is "not so simple", but how the heck, with all a PC has
>>> going on, can it be practical to optimize something like this? And to
>>> get it right (er, write?) in terms of how many disk atoms have been
>>> written to an "even" number of times.
>>>
>>> I've read about drives ad nauseam, but I am neither a mechanical nor
>>> an electrical engineer, nor nanotechnology savvy. The stuff doesn't
>>> come to me readily, the state of the art doesn't stay still for long
>>> enough... and I lack the uptake capability (gray matter).
>>>
>>> So I may not be worth your time replying -- and if you don't, I guess
>>> we both know why.
>>
>> ---
>> This email is free from viruses and malware because avast! Antivirus
>> protection is active.
>> http://www.avast.com
>>
>> --
>> AccessD mailing list
>> AccessD at databaseadvisors.com
>> http://databaseadvisors.com/mailman/listinfo/accessd
>> Website: http://www.databaseadvisors.com
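[Editor's note: for readers who think better in code than in prose, the block
swap John describes can be sketched as a toy routine. This is a
much-simplified illustration, not how any real controller firmware works: the
Block class, the wear threshold, and the erase-count bookkeeping are all
assumptions invented for the example.]

```python
class Block:
    """A toy stand-in for one NAND block tracked by the controller."""
    def __init__(self, bid):
        self.bid = bid          # physical block id
        self.erase_count = 0    # how many times this block has been erased
        self.data = None        # payload currently stored here

def static_wear_level(blocks, threshold=100):
    """If the wear gap between the most- and least-erased blocks exceeds
    `threshold`, swap their contents so the static (rarely written) data
    lands on the worn block and future writes hit the fresh block.
    Returns the (hot, cold) block ids swapped, or None if no swap."""
    hot = max(blocks, key=lambda b: b.erase_count)   # e.g. the MDB's block
    cold = min(blocks, key=lambda b: b.erase_count)  # e.g. the spreadsheet
    if hot.erase_count - cold.erase_count <= threshold:
        return None  # wear is already even enough; leave well enough alone
    # Swap payloads: the static data now sits in the high-wear block,
    # and the frequently updated data gets the low-wear block.
    hot.data, cold.data = cold.data, hot.data
    hot.erase_count += 1   # moving the cold data here costs one erase
    cold.erase_count += 1  # rewriting the hot data here costs one erase
    return (hot.bid, cold.bid)

# John's example: a spreadsheet written once vs. an MDB written 1000 times.
blocks = [Block(0), Block(1)]
blocks[0].data, blocks[0].erase_count = "spreadsheet", 1
blocks[1].data, blocks[1].erase_count = "mdb", 1000
swapped = static_wear_level(blocks)
print(swapped)         # (1, 0)
print(blocks[1].data)  # "spreadsheet" now lives in the high-wear block
```

The efficiency give-back Bill asks about is visible here: the swap itself
costs two extra erases, which only pays off if the hot block would otherwise
keep absorbing every future write.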