I'd rather have one 1 TB 7200 RPM drive than four 300 GB 10k drives.
10k just isn't much faster, and RAID 0 makes me uncomfortable; one drive dies and boom.
Just get an SSD and an HDD.
Well Stars, here's the deal: if you think the "BOOM" factor is limited to my RAID 0 vs your 1 TB, then you're wrong.
ANY hard drive is subject to a "BOOM" factor, and when that happens I don't give a rat's ass what you're running, ALL your info is GONE.
SO........ that's why they make optional 1-5 TB external backup drives, to keep your stuff safe!
PLUS, if you really want to go techy, do a four-drive setup where you mirror your RAID 0 onto a separate set of Raptors (or any other compatible pair of drives) so all your goodies are safe if you ever crash and burn...
LOL, and you thought I was just another pretty face :P Not bad for an old guy, HUH??
C'mon people, I KNOW you're smarter than that :P
My next project is to build a SUPER COMPUTER: 4 identical boxes with the best hardware available at the moment, all linked and operating as one entity.
If you have any links or advice, let me know (please don't insult my intelligence with a link to Tom's Hardware, and I don't mean that in a belittling way).
I never said the boom factor was limited to RAID 0, but RAID 0 does raise the odds of losing everything quite a bit: with two drives you're roughly twice as likely to have a drive fail, with three drives roughly three times as likely, and so on, and in RAID 0 a single drive failure takes out the whole array. And the RAID 1+0 (or 0+1... whatever the hell it is) setup you describe is insanely expensive.
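Here's the rough math behind that, in case anyone wants to play with it. The 3% annual failure rate per drive is just a number I picked for illustration, the drives are assumed to fail independently, and the RAID 10 case ignores rebuild windows, so treat it as a ballpark comparison, not a prediction:

```python
# Ballpark comparison of array data-loss odds (illustrative numbers only).

def raid0_loss_prob(p_drive, n_drives):
    """RAID 0 loses data if ANY drive fails: 1 - P(every drive survives)."""
    return 1 - (1 - p_drive) ** n_drives

def raid10_loss_prob(p_drive, n_pairs):
    """RAID 10 (striped mirrors) loses data only if BOTH drives in some pair fail."""
    p_pair = p_drive ** 2                 # both halves of one mirror fail
    return 1 - (1 - p_pair) ** n_pairs    # any dead pair kills the array

p = 0.03  # assumed annual failure rate per drive (made up for illustration)
for n in (1, 2, 4):
    print(f"RAID 0 with {n} drive(s): {raid0_loss_prob(p, n):.1%} chance of loss per year")
print(f"RAID 10 with 4 drives (2 mirrored pairs): {raid10_loss_prob(p, 2):.2%} per year")
```

With 2 drives that works out to almost exactly twice the single-drive risk (5.9% vs 3%), and with 4 drives almost four times (11.5%), so the "n times as likely" rule of thumb holds for small failure rates, while the mirrored layout drops the risk to a fraction of a percent.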
Drive failure is not uncommon in my household (both WD and Seagate, since that's what we usually buy), so my uneasiness about drive failure is well founded.
SSD RAID 0 does look interesting to me, though, since their failure rate is so low, and SSD RAID 0 is a lot more noticeable than 10k (or hell, even 15k) HDD RAID 0.
http://www.tomshardware.com/reviews/...rive,2775.html
Well, I actually took the time to read up a bit on SSDs, at this link:
http://en.wikipedia.org/wiki/Solid-state_drive
It appears that while the SSD "might" be the wave of the future, the future of SSDs is not here yet...
The writing and rewriting will wear the drive out quite rapidly compared to the standard HDDs available at the moment, PLUS, if you defrag, along with any other standard maintenance of a box and its components, it will wear out in HALF the time or sometimes less.
Soooooooooo, until they have perfected the SSD scenario, my take and advice is: STAY THE HELL AWAY from this "trendy" product until it has been perfected beyond fault, at which point I may even use them in my rigs, but DEFINITELY not now, and as I understand it not in the next 6 months either...........
Obviously it's your call, BUT I am trying to give you all a HEADS UP, because I can almost guarantee that today's SSDs WILL fail. If you want to jump into new technology without a multitude of testing and throw away your hard-earned money, then by all means be a free beta tester for the new SSDs. BUT please make them your secondary drive, and if you don't, then remember ONE thing when the "shit hits the fan", which it will:
I T O L D  Y O U  S O
DISCLAIMER - To the best of my knowledge, I do not work for, am not related to anyone who works for, has worked for, or represents Cisco, Linksys, or any of their affiliates or vendors, now or in the past.
The biggest flaw in SSDs as far as I know has actually been controller reliability - especially the JMicron controllers, which were common before the recent Sandforce controllers were released. It seems the Sandforce controllers are holding up better, but still not the best (higher-than-HDD rates of DOAs, etc.). This is my own anecdotal hearsay, though; I haven't really researched it.
As far as write-erase cycle limitations go, here are a couple of useful things:
WD white paper on NAND flash in SSD applications: http://www.wdc.com/WDProducts/SSD/wh...ution_0812.pdf
Article I found with some easier-to-digest information than the white paper: http://www.storagesearch.com/ssdmyths-endurance.html
There are other similar articles but they're nothing more than dumbed-down versions of the above.
While MLC NAND flash chips have been improving, the typical quote I've heard for endurance is 10,000 write-erase cycles per block minimum (~5% or less, usually less, of all blocks failing this early, so not a major concern), 100,000 typical (~95% or more of all blocks lasting this long), and going up from there. I've heard some claiming write limits in the millions - but I'm not so sure for MLC-based SSDs that this is true. It's certainly possible that SLC has achieved that, but no consumer-grade SSD uses SLC flash. Perhaps these higher numbers have been achieved in enterprise-grade SSDs, but I'm concerned only with consumer-grade.
In any case, if you take the data used in the article above for constant-write testing, and extrapolate it to fit a modern Sandforce-based SSD, you end up with an approximate lifetime of 1.5 years for a 120GB SSD - with the drive being written continuously at maximum throughput. This will never, ever happen in a consumer environment - it would rarely even happen in a corporate or educational environment.
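To make that concrete, here's the back-of-the-envelope version of the extrapolation. The figures (100,000 cycles per block, ~250 MB/s sustained writes, perfect wear leveling, no write amplification) are my own assumptions, not the article's exact numbers, but they land in the same ballpark:

```python
# Rough SSD endurance estimate under constant full-speed writes.
# Assumed figures: 120 GB drive, 100,000 write-erase cycles per block,
# ~250 MB/s sustained writes, ideal wear leveling, no write amplification.

capacity_gb = 120
cycles_per_block = 100_000
write_speed_mb_s = 250

total_writable_gb = capacity_gb * cycles_per_block            # ~12 million GB (~12 PB)
seconds_to_wear_out = total_writable_gb * 1024 / write_speed_mb_s
years = seconds_to_wear_out / (3600 * 24 * 365)

print(f"Total data the flash can absorb: ~{total_writable_gb / 1e6:.0f} PB")
print(f"Time to write that much at {write_speed_mb_s} MB/s nonstop: ~{years:.1f} years")
```

That works out to about a year and a half of writing flat out, 24/7. A real desktop typically writes only a few GB a day, which is why the 5-10 year figure in the next paragraph is, if anything, conservative.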
Even with heavy usage, in a consumer environment an SSD will last for 5-10 years easily before it begins to fail from write-erase cycle limitations. It's far more likely that your controller will go bad before the actual flash chips do - and for all the talk of HDDs having infinite write-erase cycles, they sure as hell have a finite number of move-the-stupid-ass-mechanical-arm cycles, as I've experienced many times.
In short, write-erase cycle limits are not a reason to avoid SSDs, and regardless of what drive solution you use, you should always have an external backup of your data if you care about it.
Oh, and don't browse Wikipedia for serious research... lol. Though I'm a bit confused about how you got those doomsday ideas from the Wiki article; you'll notice the article I referenced above is also cited there (citation #52).
Whoa, there. Don't yell, we'll get off your lawn.
In addition to what Tickle Me Emo posted, you need to understand that SSDs aren't part of some crazy new wave of the future. They've been around for a good long while, and they're built on other technologies that have been around for much longer. There's no consumer "beta testing," nor any lack of reliability testing at all with these products, and no chance that they'll start failing arbitrarily. The thing about solid-state hardware is that aside from environmental effects, the consequences of which are well known and not pertinent to the product, it's entirely possible to accurately simulate years of use in hours of testing, and of course this is done extensively before any product hits the market. The drives sold today will continue to outperform traditional mechanical drives for many years, and will likely outlive them as well.
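To illustrate the "simulate years of use" point, here's a toy sketch of my own (not any vendor's actual qualification procedure): because flash wear is just arithmetic over write-erase cycles, you can replay an entire drive lifetime in software almost instantly. The workload numbers below (5-15 GB of host writes per day, 2x write amplification, 10,000-cycle blocks) are assumptions for illustration:

```python
# Toy "fast-forward" wear simulation for a 120 GB SSD (illustrative only).
import random

NUM_BLOCKS = 30_000                  # ~120 GB of 4 MB erase blocks
CYCLE_LIMIT = 10_000                 # assumed per-block write-erase endurance
WRITE_AMPLIFICATION = 2.0            # extra writes from garbage collection, etc.

random.seed(0)
avg_wear = 0.0                       # average cycles consumed per block
days = 0
while avg_wear < CYCLE_LIMIT:
    days += 1
    host_gb_today = random.uniform(5, 15)                        # daily host writes
    blocks_written = host_gb_today * 1024 / 4 * WRITE_AMPLIFICATION
    avg_wear += blocks_written / NUM_BLOCKS                       # ideal wear leveling

print(f"Drive worn out after ~{days / 365:.0f} simulated years "
      f"(computed in well under a second of real time)")
```

The point isn't the exact number (real controllers and real workloads vary a lot); it's that the wear model is simple enough to exercise far past any realistic consumer lifetime long before a drive ever ships.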
It's funny that you should mention Cisco in your disclaimer. Bar none, and by miles and miles, solid-state storage in Cisco products has the highest failure rate of any solid-state storage implementation that I have ever worked with.
Not that the disclaimer is really necessary. Cisco are precisely the opposite of you. Not only do they embrace new technology; they also market and sell it before they've matured it.
The Cisco disclaimer was from an earlier post a while back that was actually on topic, and the part about defragging was mentioned in an article that claimed excessive wear and tear; other than that, the RAM issue kind of got lost, I guess.
Anyway, a good, sound, opinionated discussion; I enjoyed it and learned a few things along the way...