• addie@feddit.uk · 18 days ago

    Assuming these have a fairly impressive 100 MB/s sustained write speed, it’s going to take about 93 hours to write the whole contents of the disk - basically four days. That’s a long time to replace a failed drive in a RAID array; you’d need to consider multiple disks of redundancy in case another one fails while you’re resilvering the first.
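    (For reference, a minimal back-of-the-envelope sketch of where a ~93-hour figure comes from. The drive capacity isn’t stated in this thread; the ~33.5 TB below is simply what 93 hours at 100 MB/s implies.)

```python
# Back-of-the-envelope rebuild time: capacity / sustained write speed.
# The capacity is an assumption implied by the 93-hour figure above,
# not a spec quoted anywhere in this thread.
capacity_tb = 33.5          # assumed capacity in TB (decimal)
write_mb_s = 100            # assumed sustained write speed in MB/s

hours = capacity_tb * 1_000_000 / write_mb_s / 3600
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days)")   # ~93 hours, ~3.9 days
```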

    • AmbiguousProps@lemmy.today (OP) · 18 days ago

      This is one of the reasons I use unRAID with two parity disks. If one fails, I’ll still have access to my data while I rebuild the data on the replacement drive.

      Although, parity checks with these would take forever, of course…

    • catloaf@lemm.ee · 18 days ago

      That’s a pretty common failure scenario in SANs. If you buy a bunch of drives, they’re almost guaranteed to come from the same batch, meaning they’re likely to fail around the same time. The extra load of a rebuild can kill drives that are already close to failure.

      Which is why SANs have hot spares that can be allocated instantly on failure. You should also use a RAID level with enough redundancy to meet your reliability needs. And RAID is not backup; you should have backups too.

    • C126@sh.itjust.works · 18 days ago

      Two-disk parity is standard and should still be adequate. The likelihood of two failures within four days on the same array is small.
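      (A rough sanity check of that claim is sketched below. The 2% annualized failure rate, the 8-drive array size, and the assumption of independent failures are all illustrative; correlated same-batch failures like those described above would raise the real risk.)

```python
# Rough sketch: chance of a second drive failing during a ~4-day rebuild,
# assuming independent failures and a 2% annualized failure rate (AFR).
# All numbers here are assumptions, not figures from this thread.
afr = 0.02                 # assumed annualized failure rate per drive
rebuild_days = 4
remaining_drives = 7       # e.g. an 8-drive array minus the failed one

p_one = 1 - (1 - afr) ** (rebuild_days / 365)    # one given drive fails in the window
p_any = 1 - (1 - p_one) ** remaining_drives      # at least one of the rest fails
print(f"~{p_any:.2%}")                           # ~0.15% with these numbers
```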

    • DaPorkchop_@lemmy.ml · 16 days ago

      My 16 TB Ultrastars get upwards of 180 MB/s sustained read and write; these will presumably be faster than that, since the density is higher.
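      (Plugging 180 MB/s into the same back-of-the-envelope estimate as above, still assuming the ~33.5 TB capacity implied by the 93-hour figure, the full-disk write shrinks to roughly two days.)

```python
# Same back-of-the-envelope rebuild estimate as above, at 180 MB/s
# instead of 100 MB/s; the capacity is still the assumed ~33.5 TB.
capacity_tb = 33.5
for speed_mb_s in (100, 180):
    hours = capacity_tb * 1_000_000 / speed_mb_s / 3600
    print(f"{speed_mb_s} MB/s -> ~{hours:.0f} h (~{hours / 24:.1f} days)")
# 100 MB/s -> ~93 h (~3.9 days)
# 180 MB/s -> ~52 h (~2.2 days)
```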

      • frezik@midwest.social · 16 days ago

        I’m guessing that only works if the file is smaller than the RAM cache of the drives. Transfer a file that’s bigger than that and it will go fast at first, but then the cache fills and the rate drops closer to 100 MB/s.

        My data hoarder drives are a pair of WD ultrastar 18TB SAS drives on RAID1, and that’s how they tend to behave.

        • DaPorkchop_@lemmy.ml · 16 days ago (edited)

          This is for very long sustained writes, like 40 TiB at a time. I can’t say I’ve ever noticed any slowdown, but I’ll keep a closer eye on it next time I do another huge copy. I’ve also never seen any kind of noticeable slowdown on my four 8 TB SATA WD Golds, although they only get to about 150 MB/s each.

          EDIT: The effect would be obvious pretty fast even at moderate write speeds; I’ve never seen a drive with more than a GB of cache. My 16 TB drives have 256 MB, and the 8 TB drives only 64 MB of cache.
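          (A quick illustrative calculation of that point: if the sustained 180 MB/s really were cache-assisted and the platters could only keep up at ~100 MB/s, a 256 MB cache would absorb the difference for only a few seconds, so the slowdown would show up almost immediately rather than partway through a 40 TiB copy. The rates below are the figures mentioned in this thread; treat them as illustrative.)

```python
# How long a 256 MB DRAM cache could hide a gap between the incoming
# write rate and a hypothetical platter-limited rate.
cache_mb = 256              # cache on the 16 TB drives mentioned above
observed_mb_s = 180         # sustained rate actually observed
assumed_media_mb_s = 100    # hypothetical platter-limited rate (assumption)

surplus_mb_s = observed_mb_s - assumed_media_mb_s
print(f"cache full after ~{cache_mb / surplus_mb_s:.0f} s")   # ~3 s
```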