I know how RAID works and prevents data loss from disk failures. What I want to know is whether, and how easily, data can be recovered from the surviving RAID disks after a RAID controller failure or a whole-system failure. Can I simply attach one of the RAID 1 disks to a desktop system and read it as easily as a USB disk? I know getting data off the other RAID levels won’t be that simple, but is there a way to do it without rebuilding the whole RAID setup? Thanks.
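
For the Linux software RAID case, here is roughly what I’m imagining; just a sketch, assuming the RAID 1 member is an mdadm device that shows up as /dev/sdb1 on the rescue machine (that device name is made up):

    import subprocess

    def run(cmd):
        # Echo the command, then execute it; raises if it fails.
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Confirm the partition actually carries mdadm RAID metadata first.
    run(["mdadm", "--examine", "/dev/sdb1"])

    # Assemble a one-disk degraded array: --run starts it despite the
    # missing mirror half, --readonly avoids writing to the member.
    run(["mdadm", "--assemble", "--run", "--readonly", "/dev/md0", "/dev/sdb1"])

    # Mount read-only and copy files off like any normal filesystem.
    run(["mount", "-o", "ro", "/dev/md0", "/mnt"])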

  • Possibly linux@lemmy.zip · 2 months ago

    Why wouldn’t you just use software RAID? It is far more robust, and if you are using ZFS you get all the nice features that come as part of it.
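
    For example, setting up a ZFS mirror is a one-liner; a sketch, where the pool name “tank” and the device names are placeholders:

        import subprocess

        # Create a two-disk ZFS mirror (the software equivalent of RAID 1).
        subprocess.run(
            ["zpool", "create", "tank", "mirror", "/dev/sda", "/dev/sdb"],
            check=True,
        )
        # /tank is now mirrored and checksummed, with snapshots, compression,
        # and send/receive available out of the box.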

    • computergeek125@lemmy.world · 2 months ago

      I never said I didn’t use software RAID; I just wanted to add information about hardware RAID controllers. Maybe I’m blind, but I’ve never seen a good implementation of software RAID for the EFI partition or boot sector. During boot, most systems I’ve seen will always try one partition directly and then the second in order, which bypasses the concept of a RAID entirely, so the two copies would need to be kept in sync manually during updates.
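
      What I mean by keeping them in sync manually is roughly this; a sketch, where /dev/sdb1 and /mnt/esp2 are assumptions about the layout:

          import subprocess

          # After an update, copy the active EFI system partition onto the
          # standby disk's ESP so either disk can boot.
          subprocess.run(["mount", "/dev/sdb1", "/mnt/esp2"], check=True)
          subprocess.run(
              ["rsync", "-a", "--delete", "/boot/efi/", "/mnt/esp2/"],
              check=True,
          )
          subprocess.run(["umount", "/mnt/esp2"], check=True)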

      Because of that, there’s one notable place where I won’t use software RAID: I always use hardware RAID for at minimum the boot disk, because Dell firmware natively understands everything about it from a detect/boot/replace perspective (or, in a good way, doesn’t see the individual members at all). All four of my primary servers boot either from a StarTech RAID card similar to a Dell BOSS or from an array directly on the PERC. It’s only enough space to store the core OS.

      Other than that, all my other physical devices at home are hypervisors (VMware ESXi for now, until I can plan a migration), dedicated appliances (Synology DSM, which uses mdadm), or machines without redundant disks (my firewalls, which are backed up to git, and my NUC Proxmox box; the firewalls and the PVE box all run ZFS for its features).

      Three of my four ESXi servers run vSAN, which is like Ceph and replaces RAID. Like Ceph and ZFS, it requires using an HBA or passthrough disks for full performance. The last one is my standalone server. Notably, ESXi does not support any software RAID natively that isn’t vSAN, so both of the standalone server’s arrays are hardware RAID.

      When it comes time to replace that Synology, it’s going to be on TrueNAS.

      • Possibly linux@lemmy.zip · 2 months ago

        Your information is about 10 years out of date. It is trivial to boot with RAID: under EFI you just register both drives as bootable.
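
        Roughly like this; a sketch, where the partition number and loader path are placeholders for whatever the distro actually installs:

            import subprocess

            # Register an EFI boot entry for the ESP on each disk; the
            # firmware falls through to the second entry if the first
            # disk is dead. The loader path below is a placeholder.
            LOADER = "\\EFI\\BOOT\\BOOTX64.EFI"

            for disk in ("/dev/sda", "/dev/sdb"):
                subprocess.run(
                    ["efibootmgr", "--create",
                     "--disk", disk, "--part", "1",
                     "--label", f"Linux ({disk})",
                     "--loader", LOADER],
                    check=True,
                )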

        I think hardware RAID is only a last resort, as Windows has Storage Spaces and Linux has ZFS, LVM, and mdadm. I’ve never heard of a hardware RAID system that has the features a lot of these systems have, like data integrity checking and RAM caching.
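
        Data integrity checking in ZFS, for example, is just a scrub away; a sketch, where the pool name “tank” is a placeholder:

            import subprocess

            # A scrub re-reads every block in the pool and verifies its
            # checksum, repairing from the mirror copy where it can.
            subprocess.run(["zpool", "scrub", "tank"], check=True)

            # Show scrub progress and any checksum errors found.
            subprocess.run(["zpool", "status", "-v", "tank"], check=True)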

        Essentially, I don’t really see a need for hardware RAID in a home environment, and there isn’t a huge need in business either.