raid   7581


8 Tips to Make Best Use of Network Attached Storage (NAS) Devices
Network Attached Storage (NAS) benefits users in multiple ways, including file sharing and data backup. For this reason, more and more users are starting to use it. In this article, we will list 8 tips to help users get the most out of it.
Auto  Backup  Downloads  NAS  Network  Attached  Storage  outlook  recovery  RAID  Remote  Access  Share  Data  Uninterruptible  Power  Supply 
4 days ago by DataNumen
IOps?
The basics are simple: RAID introduces a write penalty. The question, of course, is how many IOPS you need per volume and how many disks that volume should contain to meet the requirement (a rough sizing sketch follows this entry).
storage  hardware  raid  server  sysadmin 
8 days ago by dusko
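A rough sizing sketch in Python (the write-penalty factors are the commonly cited textbook values, and the workload numbers are illustrative assumptions rather than figures from the bookmarked post):

# Back-of-the-envelope RAID sizing: backend IOPS = reads + writes * penalty.
# Commonly cited write penalties: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def disks_needed(frontend_iops, write_ratio, level, iops_per_disk):
    """Estimate how many disks a volume needs to serve the frontend workload."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    backend_iops = reads + writes * WRITE_PENALTY[level]
    return -(-int(backend_iops) // iops_per_disk)  # ceiling division

# Example: 5000 frontend IOPS, 30% writes, on 180-IOPS 10k spindles.
for level in WRITE_PENALTY:
    print(level, disks_needed(5000, 0.3, level, iops_per_disk=180))

The penalty counts the extra backend operations behind each logical write; a RAID 5 write, for instance, must read the old data and old parity before writing both back, hence the factor of 4.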
SnapRAID
A backup program for disk arrays. It stores parity information about your data and can recover from up to six disk failures.
n54l  raid  backup  storage  zfs 
16 days ago by e2b
5 Top Advantages of Common RAID Systems
Most people know that RAID is widely used in the contemporary era. Yet, only a few of them realize its actual advantages. Thus, in this post, we will show you 5 primary merits of RAID.
Advantages  of  RAID  Continuous  System  Running  Fast  Speed  Fault  Tolerance  Large  Storage  Parity  Check  pst  recovery  Systems 
4 weeks ago by DataNumen
4 Most Common Types of RAID System Failures
Nowadays, both businesses and individuals are starting to use RAID systems to gain improved performance and additional storage. Although a RAID system does provide users with many benefits, it can fail due to its vulnerable components. This article will list the 4 most common types of RAID system failures.
Application  Failures  damaged  PST  Hardware  Human  Error  RAID  System  Software 
4 weeks ago by DataNumen
How to do SSD RAID5 the right way
As Kooler stated, the RAID 5 issues that plague spinning drives are not there for SSDs.

Even though you hear on SpiceWorks that RAID 5 is dead, the truth is RAID 5 could well make a comeback in the SSD world.
================
snorble wrote:
* I've never done SSD RAID5 before, so I'm not sure how reasonable this is. I guess my main questions are:
* Are the cheaper SSDs like Intel S3700 viable for this? Or are more expensive SSDs needed to make this reliable?
* Does the RAID controller matter? I have seen people mention that with SSD RAID5 you want a controller with a decent write cache, but I'm not sure what qualifies. Is a PERC H730P w/ 1GB or 2GB NV cache good enough for this?

A: RAID 5 with SSD is completely fine. Cheaper SSDs can be bad; they will potentially die rather quickly. More SSDs do not increase reliability; adding more drives lowers reliability.

RAID controller matters a lot. 2GB of cache is ideal; 1GB is good. With any number of SSDs, you need a really good controller or the controller itself becomes the bottleneck.
================
Lothar863 wrote:
no love for raid 50 or 60?

> For any x0 RAID level you need some massive scale for it to make any sense.
================
timmwatts wrote:
I am not sure, but when I read the question, there is a bit of a contradiction. Why?
- High throughput and IOPS are needed
- RAID 5 is the selection
- Cost is being considered
These three points oppose each other.

> This is SSD. RAID 5 with SSD still delivers so many IOPS that, while it isn't as fast as RAID 10, of course, you can still easily saturate the potential throughput of the controller.
================
timmwatts wrote:
Q: If capacity is the requirement, then isn't RAID 5 superseded by RAID 6, which offers more reliability than RAID 5, with less stress and less time to rebuild?

A: With SSD, RAID 6 doesn't have that much more reliability than RAID 5 because RAID 5 is more reliable than you think and RAID 6 has so much more write expansion. Certainly RAID 6 is still safer, but not by the degree that you might be thinking.
================
timmwatts wrote:
Now, if cost is a consideration, why SSD? Certainly the lifespan of SSDs is not being considered in comparison with SAS MTBF.

> SSDs last longer than SAS when looking at enterprise drives. SSDs are crazy reliable.
================
hutchingsp wrote:
I agree that it sounds like FUD, but I do also wonder how much we know about real-world workloads with consumer SSDs.

How many people have run an R720 or DL380 stuffed with consumer drives for 3 years, to have some real-world data (however small a sample set) on what it is like to live with?

> My guess is a lot of FUD too. Some real concerns, but more FUD than reality. Maybe I can convince NTG to put this in a lab to test. Consumer SSDs aren't that crazy expensive.
================
Scott Alan Miller wrote:
RAID 5 with SSD is completely fine. Cheaper SSDs can be bad; they will potentially die rather quickly. More SSDs do not increase reliability; adding more drives lowers reliability.

RAID controller matters a lot. 2GB of cache is ideal; 1GB is good. With any number of SSDs, you need a really good controller or the controller itself becomes the bottleneck.

> A bigger cache means you can delay writes further, meaning a bigger chance of a full-stripe update, plus pseudo-tiering on blocks that get slammed a lot, reducing hits to the drives.
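To see why full-stripe updates matter, here is a toy sketch of the classic per-write operation counts (the accounting is the standard textbook one, assumed here rather than taken from this thread):

# Disk operations per logical write on RAID 5.
# Read-modify-write: read old data + read old parity + write new data
# + write new parity = 4 operations for a single-block update.
# Full-stripe write: write n-1 data blocks + 1 parity block,
# i.e. n operations covering n-1 logical writes.

def ops_per_write_rmw():
    return 4.0

def ops_per_write_full_stripe(n_disks):
    return n_disks / (n_disks - 1)

for n in (4, 8, 16):
    print(f"{n} disks: RMW {ops_per_write_rmw():.2f} ops/write, "
          f"full stripe {ops_per_write_full_stripe(n):.2f} ops/write")

With 8 disks, a cached full-stripe update costs about 1.14 backend operations per write instead of 4, which is why a large battery-backed cache pays off.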
================
We've got several R710s and R720s maxed out with consumer-grade flash (mostly OCZ stuff from before they were bought out - Agility 3 and Agility 4 in the main) and they've all been quite happily running SAP systems for ages without a hiccup. The only failures we've had in nearly 5 years were a pair of Intel SSDs with very close serial numbers; their 8 neighbours in the array are still going strong. Altogether we've got well over 100 SSDs with, so far, no slowdowns and no failures apart from the two that died and have since been replaced - a similar failure rate to what we used to get with HDDs.

They're all running behind 2GB Dell PERC RAID controllers and the performance is superb - the RAID controller is always the bottleneck. We started off with RAID10 but have since transitioned to RAID5 and there's been no noticeable change in the number of IOPS - because the RAID controller is the bottleneck, and because of the near-zero latency for random reads, a mirror of the data just doesn't help.

One thing to be aware of, though - make sure you update the firmware on your RAID controller or your performance won't be as good as it should be, as older firmware versions make assumptions about how HDDs work that simply don't apply to an SSD.
================
esraerimez wrote:
I'm curious, how would one go about doing TRIM on an SSD RAID 5? I know Intel has RST drivers for software RAID that pass TRIM to the drives, but how would you do this with hardware RAID?

> Hardware RAID is about 100% abstraction and encapsulation. That's 100%. That means that "you" don't do anything. This isn't software RAID or Intel FakeRAID, this is genuine hardware RAID. In choosing hardware RAID you pass all responsibility for drive communications to the controller and that's it; you are done with it. The controller handles TRIM (or doesn't, depending on the controller). The controller does everything. You don't worry about any of those things anymore, because the drives are not attached to your server; they are attached to the controller and the controller is attached to the server.
================
raid 
7 weeks ago by dusko
RAID: Parity RAID vs SSD
The post describes the history of RAID 5 and how it became obsolete at some point in time, simply because HDD capacity grew at an enormous rate. The chance of failure grew to near-certainty when spinning disks reached the TB scale, because read speed still had the same physical limits. Basically, building a RAID 5 even with 1 TB disks would mean near-certain failure of the whole array, and quite soon. The array technology was “saved” by an unlikely ally – the SSD. Being faster than hard disk drives in everything, SSDs almost nullify the chance of the above-mentioned failures. The post is written for the everyday reader, not just engineers, and is quite comprehensible even without special knowledge and skills.
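That failure claim is usually backed by URE arithmetic; here is a minimal sketch of the back-of-the-envelope math (the 10^-14 error rate is the commonly quoted consumer-HDD spec, an assumption here rather than a figure from the post):

# Chance of hitting at least one unrecoverable read error (URE) while
# reading all surviving disks during a RAID 5 rebuild, assuming
# independent bit errors at the drive's rated error rate.

def rebuild_ure_probability(data_read_tb, ure_per_bit=1e-14):
    bits = data_read_tb * 1e12 * 8          # decimal TB -> bits
    return 1 - (1 - ure_per_bit) ** bits

# Example: rebuilding a 4 x 2 TB RAID 5 means reading ~6 TB from survivors.
print(f"{rebuild_ure_probability(6):.0%}")  # roughly 38%

At 12 TB read, the same formula gives roughly a 62% chance that the rebuild dies before it finishes, which is the near-certainty the post describes.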
================
Solution
Using SSDs renders RAID 5 immune to these reliability issues, because the array stays in the vulnerable state for just a fraction of the time it does with HDDs. The key factors here are:
- SSDs excel at fast random data access
- Smaller capacity, which means shorter recovery
- The read-modify-write sequence runs much faster on flash
- Bit rot is mitigated by stronger checksums at the drive-controller level
Once the window of lost redundancy is almost nullified, there's only a small and acceptable chance of the RAID going down. The fact that SSDs are also prone to failing due to wear is irrelevant, because this failure mode is very predictable. Besides, overall SSD life can be prolonged by workload balancing, for example striping. Flash price seems to be the only drawback, but it goes down every year. Realistic studies predict flash becoming cheaper than SAS HDDs as early as 2017.
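The read-modify-write sequence and the recovery itself are plain XOR arithmetic; a toy sketch assuming single-parity RAID 5 on byte strings (the block contents and names are purely illustrative):

# Toy single-parity (XOR) RAID 5 arithmetic.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# A stripe: three data blocks plus their parity.
d = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor(xor(d[0], d[1]), d[2])

# Read-modify-write: update one block without reading the whole stripe.
new_block = b"XXXX"
parity = xor(xor(parity, d[1]), new_block)  # p' = p ^ old ^ new
d[1] = new_block

# Recovery: rebuild a lost block from the survivors and the parity.
rebuilt = xor(xor(d[0], d[1]), parity)
assert rebuilt == d[2]

Every single-block update is two reads and two writes regardless of media, but on flash the extra reads are nearly free, which is where the speedup in the list above comes from.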
================
Conclusion
Utilizing SSDs renders RAID 5 immune to parity RAID issues, because it minimizes rebuild time to the point where failure risk is almost nullified. While high-capacity HDDs made RAID 5 unreliable, flash has made it relevant again.
================
raid 
7 weeks ago by dusko
RAID 5 vs RAID 10 for SSDs in 2016
I built mine with a RAID10 array of SSDs three years ago. I just did a 16-drive RAID 5 array for an ESXi server, and while benchmarking shows a world of difference, you don't see it in real life. The LEDs on my drives are constantly active, and I have 128GB of RAM in front of them.

You'll be fine with RAID5.
================
DustinB3403:
"Jimmy9008 said: You get SSD for great performance and the cost is high. Spending that money, then using RAID 5, which removes a large chunk of the performance you just paid for, seems like a step backwards.

"I'm going to buy a Ferrari and then have it limited to 100 KMPH." - Why get the Ferrari..."

Except that the SSD array is way faster than Spinning Rust..... so you're still going to go way faster.... your analogy isn't quite accurate.

It would be better to say something like: I have a Ferrari that could go 600 KMPH but can only take 1 person, whereas with a RAID 5 SSD array, you have a Ferrari that can go 300 KMPH but can take two people and a bag or two of groceries as well.
================
mtoddh:
AVOID RAID 5!!!

We had (2) RAID 5 disasters, which were devastating.

RAID 10 is the safest way to go. Yes, it does cost more, as it takes more disk hardware for the storage space, but it is always worth it!!!

RAID 5 is old technology created when disk space was limited and very expensive. It should not be used in current technology. RAID 6 is just a beefed up RAID 5 with far less chance of failure.

I don't care whether you are using SSDs or the classic spinning HDs. AVOID RAID 5!!!
================
As to the topic question: which will management find more acceptable, the extra speed or the extra storage? Personally, nowadays I'd go with RAID 10, but in the past, when drive sizes were not as good, RAID 5 would have been my choice.

I'm about to start rebuilding a NAS that has had a drive and volume failure. Originally it was RAID 5 with 2TB SATA drives (the term spinning rust really annoys me for some reason; I've yet to see a platter rust unless it was covered in a liquid first); I'm replacing them with 4TB drives and these will be RAID 10.
================
Dell have dropped RAID5 from the list of supported levels on their storage arrays. This is because 99% of them are now on larger disks with long rebuild times, and Dell sensibly don't want to expose customers to the risk of a URE during rebuild. That being said, I do love the "NO NO NO ANYTHING BUT RAID5, THINK OF THE CHILDREN" comments. Again, as I have stated before and probably will do again, there are probably more RAID5 arrays happily working than you'd think, and many more will do so for many years to come. RAID5 is fine on SSDs and acceptable on spinning disks where the data is not critical; RAID6 is preferred on spinners, but it depends on the situation. If you are having issues with your RAID arrays, then you need to check how they are configured.

Personally I would be pushing it all up to 365 and letting MS take the load; it's what I do. Not having to deal with huge Exchange servers anymore is a godsend, especially when you need to defrag large stores and deal with ones that get left until they just fall over.
================
Q: Correct me if I'm wrong, but I thought it was frowned upon to use SSDs in RAID because they have limited read/writes?

A: That is wrong (well, not the limited read/writes part; that is correct), but the number of read/writes is a massive number.

So it's really not a concern in any business that is looking at an SSD Array.
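To put "massive" in numbers, a quick sketch of the usual drive-writes-per-day arithmetic (the 800 GB / 3 DWPD / 5-year figures are illustrative assumptions for a mixed-use enterprise drive, not vendor data quoted in this thread):

# Rated endurance from the usual DWPD (drive writes per day) spec.

def endurance_tbw(capacity_gb, dwpd, years):
    """Total terabytes that may be written over the warranty period."""
    return capacity_gb * dwpd * 365 * years / 1000

# Example: an 800 GB mixed-use drive rated for 3 DWPD over 5 years.
print(f"{endurance_tbw(800, 3, 5):.0f} TB")  # 4380 TB, i.e. ~4.4 PB

At a steady 100 GB of writes per day, a budget like that lasts over a century, which is why write endurance rarely matters for a business-sized array.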
================
mtoddh wrote:
With RAID 5, when (not if, but when) a drive fails and the automatic rebuild starts, if any of the other drives suffer an unrecoverable read error (URE), the entire array will fail. This is not a recoverable situation. Instead, you must reformat and restore from your backups (tape or otherwise).

> The issue that you and a lot of other people are forgetting is that the MTBF for an SSD is so much greater when compared against the chances of hitting a URE.

Using RAID 10, although better with Spinning Rust disks in the multi-terabyte range, is completely sound because it avoids parity across multiple disks. This is a good thing, because if you hit a URE on a parity disk, the array is toast.

RAID 5 on SSDs doesn't have this issue: the capacity is much lower in comparison, and the performance is so much higher that the chance of a second SSD failing in an already degraded array is comparable to winning the Mega Millions jackpot.

It's just not very likely to occur. RAID 10 is also a huge waste of storage capacity for very little gain.
================
onamor wrote:
Hi Dustin
for large businesses I would get real storage. In my case I'm speaking of a VMware lab for VDI, and in this case (Office is not the main application) performance matters. Unfortunately I can't afford fancy things like nVidia Grid cards for CAD users, but IO still counts. Thanks anyway for your reply; I'm still eager to get as many opinions as I can.

> What do you mean by "real storage"? Local storage is real storage that is attached to the server.

What kind of scenario are you seeing as upcoming?

The hypervisor doesn't care what kind of storage you have, so long as it can access it. The performance will be impacted by the storage type; Local > DAS > NAS > SAN is the order of preference.
================
Q: Tim_G wrote:
What about in a Hyper-V situation where you have, let's say, 15 VMs running on SSDs, along with some storage shares on SSDs.

Do you go RAID5 or RAID10? How likely are you to run into a failure during a RAID5 rebuild? I know VMs themselves don't really do much at all, but database and file share stuff does.

They are:

800GB Solid State Drive uSATA Mix Use Slim MLC 6Gbps 1.8in Hot-plug Drive, S3610

6 of them in RAID10, or 5 of them in RAID5 plus hot-spare?

A: RAID 5.

The usage of the device doesn't affect the MTBF ratio.... the SSD is still as reliable sitting idle or at max usage...
================
raid 
7 weeks ago by dusko
RAID5 is ok for SSD drives (on a particular machine)
For what it's worth, I recently put into service a $100K IBM Power server (a high-performance database server) with 6 SSD drives. I was aware of the issue listed by the OP, and I escalated a support request about the preferred RAID level to the IBM main offices. The answer was that RAID5 is the best choice for SSD drives on that machine; RAID 10 was explicitly not recommended. I believe that the RAID5 issue listed in the original paper is either outdated or biased to promote other products.

In my experience, the SSD failure rate is far lower than that of mechanical drives. Across about 100 drives, I had one failure due to a known Intel SSD BIOS bug and another failure almost immediately after putting the drive into production (an obvious manufacturing flaw).
raid 
7 weeks ago by dusko
