

There are more desperate actions that can be taken to attempt to get logical volumes back online long enough to recover data, such as re-tagging the array. If you don't have a double fault, you should be able to replace the problem drive, force the other online, and hope a rebuild completes; then you get to work on recovering that data before you have any additional issues.

Furthermore, never forget that the SAS cables, backplane, and the controller itself may be having issues if multiple drives are dropping out of the array, especially if this is a chronic issue.
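Before forcing anything online, it can help to sanity-check which member drive is the better candidate. Here is a minimal sketch of that check, assuming the disks are visible to the OS as /dev/sd* devices and smartmontools is installed; the device list is made up for illustration, and drives sitting behind a hardware RAID controller usually need smartctl's -d option (e.g. -d megaraid,N) to be reachable at all.

```python
#!/usr/bin/env python3
"""Rough sketch: run smartctl health checks on suspect RAID member drives.

Assumes smartmontools is installed and the drives are reachable as block
devices; the device names below are placeholders, not from the original post.
"""
import subprocess

# Hypothetical member drives of the degraded array -- adjust for your system.
SUSPECT_DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

def smart_health(device: str) -> str:
    """Return the overall health assessment line reported by `smartctl -H`."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        # ATA drives print "overall-health", SAS/SCSI drives print "SMART Health Status".
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "no health line found (controller may be hiding the drive)"

if __name__ == "__main__":
    for dev in SUSPECT_DRIVES:
        print(f"{dev}: {smart_health(dev)}")
```

A drive that still reports healthy but got kicked from the array for timeouts is a different story than one that is logging media errors, which is where the suspicion about cables, backplane, and controller above comes in.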

Look for medium errors on multiple drives in the same stripe. Here is a good resource for seeing what the SCSI sense codes in the logs are telling you. The logs that can be pulled from the RAID controller are invaluable in this scenario, both for determining which drive you would want to replace and for confirming whether or not there is a puncture (there's a minimal log-scanning sketch at the bottom of this section showing one way to tally those).

Holy shit is this restore sloooooooooooooooooow. The damn restore will probably take 8 hours.

New drives are set to be there by 10am tomorrow morning. I booted off the CD just to see the process and it looks like it might work. They had two spare drives that were only 250 GB (the dead ones were 300s). Took all damn day to get the Barracuda support renewed so I can download the bare metal restore live ISO. Got them migrated to 365 at least, so they have new mail incoming/outgoing.

This server was being backed up on a file level, just not at a system state / AD level. Maybe I just export all their mailboxes to PST (around 30) and then get them going in that ASAP?

EDIT: Should have been more clear. We were working with them on quoting 365. They have no backups as far as the DC goes. They have another server that is running Exchange 2007.

So I got a hot one!! Customer had one DC and it lost 2 drives in a RAID 5.
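Since the medium-error / sense-code advice above is the key diagnostic step, here is a minimal sketch of what that scan could look like. It assumes a Linux-style kernel log (dmesg or /var/log/messages) where the sd driver prints lines like "sd 2:0:0:0: [sdb] ... Sense Key : Medium Error"; controller-exported logs (PERC/MegaRAID TTY logs, for instance) use their own formats, so the log path and regex here are assumptions, not anything from the original thread.

```python
#!/usr/bin/env python3
"""Sketch: tally SCSI medium errors per device from a Linux-style kernel log.

The log path and regex are assumptions based on the usual sd-driver message
format; adapt them to whatever log your RAID controller or OS actually
produces.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/messages"  # placeholder -- could also be saved dmesg output

# Matches the device name in kernel sd messages, e.g. "sd 2:0:0:0: [sdb] ..."
DEVICE_RE = re.compile(r"\[(sd[a-z]+)\]")

def count_medium_errors(path: str) -> Counter:
    """Count lines mentioning a Medium Error sense key, grouped by device."""
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            if "Medium Error" not in line:
                continue
            match = DEVICE_RE.search(line)
            counts[match.group(1) if match else "unknown"] += 1
    return counts

if __name__ == "__main__":
    for device, hits in count_medium_errors(LOG_PATH).most_common():
        print(f"{device}: {hits} medium-error lines")
```

If more than one member drive shows medium errors in the same stripe region, that points toward a punctured array rather than a single bad disk, which changes whether forcing a drive online and rebuilding is even worth attempting.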
