I know how RAID works and prevents data loss from disk failures. What I want to know is whether, and how easily, data can be recovered from the remaining working RAID disks after a RAID controller failure or a whole-system failure. Can I simply attach one of the RAID 1 disks to a desktop system and read it like a plain USB disk? I know getting data off the other RAID levels won’t be that simple, but is there a way to do it without rebuilding the whole RAID setup? Thanks.

  • tburkhol@lemmy.world · 3 months ago

    RAID is more likely to fail than a single disk. You have the chance of single-disk failure, multiplied by the number of disks, plus the chance of controller failure.

    RAID 1 and RAID 5 protect against that by sharing data across multiple disks, so you can re-create a failed drive, but a controller failure may be unrecoverable, depending on the availability of an identical replacement controller. With one failed disk in RAID 1, you should be able to use the array ‘degraded,’ as long as your controller still works. Depending on how the controller works, that disk may or may not be recognizable to another system without the controller.

    RAID 1 disks are not just 2 copies of normal disks. Example: I use software RAID 1, and if I take one of the drives to another system, that system recognizes it as a RAID disk and creates a single-disk, degraded RAID array with it. I can mount the array, but if I try to mount the single disk directly, I get filesystem errors.
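
    A rough, hypothetical sketch of that recovery with Linux mdadm (device names and mount point are made up, and other software RAID stacks behave differently):

    ```
    # Assemble a degraded, single-member array from the lone RAID 1 disk;
    # --run starts it even though the other mirror half is missing
    sudo mdadm --assemble --run /dev/md0 /dev/sdb1

    # Confirm it came up as a degraded RAID 1
    cat /proc/mdstat

    # Mount the array device, not the raw member, to avoid the
    # filesystem errors mentioned above
    sudo mount /dev/md0 /mnt/recovered
    ```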

    • Big_Boss_77@lemmynsfw.com · 3 months ago

      I’ve never been a big fan of RAID for this reason… but I’ve also never had enough mission critical data that I couldn’t just store hard copy backups.

      That being said… let me ask you this:

      Is there a better way than RAID for data preservation/redundancy?

      • Björn Tantau@swg-empire.de · 3 months ago

        Just for drive redundancy it’s awesome. If one drive fails, you just pull it out, put in a new one, and let the array rebuild. I guess the upside of hardware RAID is that some controllers even let you swap a disk without powering down. Either way, you have minimal downtime.

        I guess a better way would be to have multiple servers. Though with features like checksums in BTRFS I guess a RAID is still better because it can protect against bitrot. And with directly connected systems in a RAID it is generally easier to ensure consistency.
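
        As a concrete example of catching bitrot, assuming a btrfs filesystem mounted at /mnt/pool (path is hypothetical):

        ```
        # A scrub re-reads every block and verifies checksums; on a btrfs
        # RAID 1 profile it repairs bad copies from the healthy mirror
        sudo btrfs scrub start /mnt/pool
        sudo btrfs scrub status /mnt/pool
        ```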

        • hendrik@palaver.p3x.de · 3 months ago

          Btw: with the regular Linux software mdraid, you can also swap drives without powering down. That all works fine while running, unless your motherboard’s SATA controller craps out in these cases. But the mdraid itself will handle it just fine.
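
          Roughly, a live swap looks like this (array and device names are hypothetical, and it assumes the SATA ports support hot-plug):

          ```
          # Mark the dying disk as failed and remove it from the array
          sudo mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

          # Physically swap the drive, partition the new one, then add it;
          # the array rebuilds in the background while it stays mounted
          sudo mdadm --manage /dev/md0 --add /dev/sdc1

          # Watch the rebuild progress
          cat /proc/mdstat
          ```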

        • Big_Boss_77@lemmynsfw.com · 3 months ago

          Yeah, that’s generally my consensus as well. Just curious if someone had a better way that maybe I didn’t know about.

          • schizo@forum.uncomfortable.business · 3 months ago

            A tool I’ve found way more useful than actual RAID is snapraid.

            It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit is a big deal, because I can scrub all the data in the array and it’ll happily tell me if something funky has happened.

            It’s been super useful on my NAS, where it’s the only thing standing between my pile of random drives and data loss.

            There’s a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and Linux ISO collection somewhat protected (use a 3-2-1 backup strategy, for the love of god), it’s a fairly viable option.
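
            For anyone curious, a minimal setup looks roughly like this (paths and disk names are made up, not a complete config):

            ```
            # /etc/snapraid.conf (sketch)
            parity /mnt/parity1/snapraid.parity
            content /var/snapraid/snapraid.content
            content /mnt/disk1/snapraid.content
            data d1 /mnt/disk1
            data d2 /mnt/disk2

            # Then, from the shell:
            snapraid sync       # update parity after data changes
            snapraid scrub      # verify the array and report silent corruption
            snapraid fix -d d1  # rebuild files onto a replaced data disk
            ```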

            • Big_Boss_77@lemmynsfw.com · 3 months ago

              Very cool, this is actually the sort of thing I was interested in. I’m looking at building a fairly heavy NAS box before long and I’d love to not have to deal with the expense of a full raid setup.

              For stuff like shows/movies, how do they perform after recovery?

              • OneCardboardBox@lemmy.sdf.org · 3 months ago

                If you’re doing it from scratch, I’d recommend starting with a filesystem that has checksums and filesystem scrubs built in, e.g. BTRFS or ZFS.

                The benefit of something like BTRFS is that you can always add disks down the line and turn it into a RAID array with a couple of commands.
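
                For example, converting a single-disk btrfs filesystem into a two-disk RAID 1 is roughly (device and mount point are hypothetical):

                ```
                # Add a second disk to the existing filesystem
                sudo btrfs device add /dev/sdc /mnt/pool

                # Rebalance, converting data and metadata to the raid1 profile
                sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

                # Confirm the new profile and space usage
                sudo btrfs filesystem usage /mnt/pool
                ```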

      • catloaf@lemm.ee · 3 months ago

        With software RAID, there is no controller to fail.

        Well, that’s not strictly true, because you still have a SATA/SAS controller, HBA, backplane, or whatever, but they’re more easily replaceable. (Unless it’s integrated into the motherboard, but then it’s not a separate component to fail.)

    • catloaf@lemm.ee · 3 months ago

      > RAID is more likely to fail than a single disk. You have the chance of single-disk failure, multiplied by the number of disks, plus the chance of controller failure.

      This is poorly phrased. A RAID with a bad disk is not failed, it is degraded. The entire array is not more likely to fail than a single disk.

      Yes, you are more likely to experience a disk failure, but like you said, only because you have more disks in the first place. (However, there is also the phenomenon where, after replacing a failed disk, the additional load during the rebuild might cause a second disk to fail, which is why you should replace failed disks as soon as possible. And have backups.)