Jim Lawrence
accessd at shaw.ca
Fri May 7 09:25:36 CDT 2010
For a brief moment there the terabyte was the ultimate in size, but that point passed quickly and the computer world moved on to the petabyte; well, that ultimate has now been surpassed again, as we have zettabytes.
http://www.guardian.co.uk/technology/2010/may/03/humanity-digital-output-zettabyte

Some recent findings from a team that has been experimenting with the new super database called Cassandra. Notice that throughout the performance trials the CPU utilization remains flat! The team pushed the product to see where better utilization could be achieved and noted that enabling caching would help performance... But keep in mind this DB's performance is already far beyond our standard SQL databases.
http://jamesgolick.com/2010/4/4/two-weeks-with-cassandra.html

An aside: it will be a while before the hard-drive bottleneck is really fully resolved. Right now, splitting a data store across numerous drives, all indexed and cached, is the only way to lessen the performance pain. That is why the new breed of distributed databases is so fast: they manage the hardware layer so well. Our current crop of standard SQL databases just leave the hardware to manage itself.

Jim
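To make the "managing the hardware layer" point concrete, here is a minimal Python sketch of the basic idea behind spreading a data store across several drives: each key is hashed to pick a drive deterministically, so reads and writes fan out in parallel instead of queueing behind a single disk. This is only an illustration of the partitioning principle, not Cassandra's actual code, and the drive paths are hypothetical.

```python
import hashlib

# Hypothetical mount points for the drives the store is split across.
DRIVES = ["/mnt/disk0", "/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]

def drive_for_key(key: str) -> str:
    """Deterministically map a key to one drive via its hash.

    Hashing spreads keys roughly evenly, so no single disk becomes
    the bottleneck for all traffic.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(DRIVES)
    return DRIVES[index]

# The same key always lands on the same drive, so lookups know
# exactly where to go without scanning every disk.
```

A real distributed database extends this same idea across machines (and re-balances when nodes are added or removed), which is how it keeps every spindle busy while a single-box SQL server leaves that to the OS.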