New update: my current setup is a Dell PowerEdge T310 with 6x4TB SAS drives, a Xeon CPU, and 12GB of ECC RAM, all parts stock. No hardware RAID. 2.5GbE network card. Should I just replace the 6 drives with larger capacities? That will probably be more than $10/TB… I haven’t bought the 16 drives yet; they are used 4TB SAS drives that turn out to be about $40 each.

Current storage: 8TB used out of 14… and lots of cold drives waiting to get copied, probably 10TB+. Is it worth copying all the cold-storage drives to the redundant NAS?

Update: budget is $200–600. The reason for the build is that I found cheap 4TB drives for almost $10/TB, so I want to use as many of them as I can.

As a beginner, I am trying to put together my final NAS build.

I have a 6x4TB Dell server, but it’s not enough.

I am currently trying to build the final boss of my NASes: 16x4TB with TrueNAS and RAID.

I am unsure of what parts to buy as I am a complete beginner.

I found a case that can hold all 14 drives.

I need a motherboard, CPU, RAM, and a PSU.

I am on a budget, kind of.

What motherboard do you recommend? One pulled from a workstation with its CPU and RAM? A server board? A normal consumer board with a consumer CPU? The motherboard should have some PCIe slots for 2 SATA cards and one 2.5GbE card.

What CPU to run all these drives?

What RAM and how much? 16GB? 32GB? ECC or non-ECC? DDR4 or DDR3?

Power supply: 850w or more?

All parts should be able to support the 16 drives with headroom…

I would appreciate any help with this build; I want to build it as soon as possible.

Thanks
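When weighing the two layouts, a quick price-per-usable-terabyte calculation helps. A minimal sketch, using the $40-per-used-4TB figure from the post; the $200 price for a new 16TB drive and the RAIDZ2-style two-disk redundancy are assumptions for illustration only:

```python
# Compare cost per usable TB for many small drives vs. few large ones,
# assuming a single RAIDZ2-style vdev (2 disks' worth of redundancy).
# Drive prices are placeholder assumptions, not quotes.

def usable_tb(n_drives, tb_each, parity=2):
    """Usable capacity after subtracting `parity` disks of redundancy."""
    return (n_drives - parity) * tb_each

def cost_per_usable_tb(n_drives, tb_each, price_each, parity=2):
    return n_drives * price_each / usable_tb(n_drives, tb_each, parity)

many_small = cost_per_usable_tb(16, 4, 40)    # 16x4TB used -> 56 TB usable
few_large  = cost_per_usable_tb(4, 16, 200)   # 4x16TB new  -> 32 TB usable

print(f"16x4TB: ${many_small:.2f}/usable TB")  # 640 / 56 ≈ $11.43
print(f"4x16TB: ${few_large:.2f}/usable TB")   # 800 / 32 = $25.00
```

Up-front cost clearly favors the cheap used drives; the power-consumption comments below are where the comparison tightens over time.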

    • frongt@lemmy.zip · 11 days ago

      I would seek the best price per terabyte while still allowing redundancy.

      • hesh@quokk.au · 11 days ago

        True, but I would factor in some kind of cost/longevity penalty for increasing the number of drives. Even if 16x4 is a bit cheaper than 4x16 today, will it die faster?

        • frongt@lemmy.zip · 11 days ago

          At these scales, I don’t think it’s measurable, if statistically significant at all.

          In any case, you should always be ready to replace a drive that fails. I buy used because they’re significantly cheaper (or at least they used to be) and I’ve never had any major failures.

          • Onomatopoeia@lemmy.cafe · 11 days ago

            And while more drives means more opportunities for failure, it also means that when a failed drive is replaced, the replacement is likely from a different manufacturing period.

            I have a 5-drive NAS in which I’ve been upgrading a single drive every 6 months. This has the benefit of slowly increasing capacity while also ensuring the drives are of different ages, so they’re less likely to fail simultaneously. (Now I’m waiting for prices to come back down, dammit.)

  • vane@lemmy.world · 11 days ago

    It’s better to buy 4x 16–20TB drives and expand storage later, instead of buying 16 4TB drives. Also, 16 3.5-inch HDDs draw around 200W of power on their own.
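    A figure in that ballpark can be sanity-checked from per-drive draw. A rough sketch, where the per-drive numbers are assumptions (a typical 3.5-inch HDD draws roughly 5W idle and 9W active, with brief spin-up peaks well above that), not measurements:

```python
# Ballpark steady-state power draw for a spinning-disk array.
# Per-drive wattages are assumptions for a typical 3.5" HDD;
# spin-up surges and the rest of the system are not included.
IDLE_W, ACTIVE_W = 5.0, 9.0

def array_power_w(n_drives, active=True):
    """Draw of the drives alone, in watts."""
    return n_drives * (ACTIVE_W if active else IDLE_W)

print(array_power_w(16))          # 144.0 W active, 16 drives
print(array_power_w(16, False))   # 80.0 W idle
print(array_power_w(4))           # 36.0 W active, 4 drives
```

    Add spin-up surges, the controller cards, and platform overhead and a 16-drive box plausibly lands near the 200W mentioned above.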

  • Shimitar@downonthestreet.eu · 10 days ago

    I wouldn’t use more than 4 or 6 disks in a home environment. Especially with mechanical drives, 24/7 power consumption would get me very worried.

    I run 4x 8TB SSDs: not cheap, but solid, low power AND low heat (even more important).

    Also consider heat dissipation: at home you most likely don’t have constant temperature and humidity, so many spinning disks can suffer from heat, and that will kill them faster.

    Longevity… With so much space I would expect to keep it running a decade or more, so factor in 10x365x24 hours of operation: energy consumed, heat dissipation, and failure rate.

    On top of that, whatever CPU and RAM you throw at it is almost meaningless; anything will work, even an Intel N100 NUC. Having enough cables and ports, on the other hand… well.

  • KairuByte@lemmy.dbzer0.com · 11 days ago

    Honestly, you might want to look into proper server hardware. There are many out there that support dozens of drives, assuming you’re willing to go with a blade. Even if you explicitly want a tower, server hardware is where you’re going to get the best support.

    You’ll most likely also want to increase the size of your drives. Assuming you’re being smart and using RAID, you’re going to lose a chunk of that raw storage to redundancy.

    • B0rax@feddit.org · 10 days ago

      They already have the disks, they are looking for the rest of the build.

  • Decronym@lemmy.decronym.xyzB · edited · 5 days ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer letters  More letters
    NAS            Network-Attached Storage
    NUC            Next Unit of Computing (Intel’s small-computer brand)
    PSU            Power Supply Unit
    RAID           Redundant Array of Independent Disks (mass storage)
    SSD            Solid State Drive (mass storage)
    ZFS            Solaris/Linux filesystem focusing on data integrity

    6 acronyms in this thread; the most compressed thread commented on today has 16 acronyms.

    [Thread #156 for this comm, first seen 11th Mar 2026, 21:50] [FAQ] [Full list] [Contact] [Source code]

  • BlackEco@lemmy.blackeco.com · 11 days ago

    What’s the case? Does it have the ability to hot-swap drives (even with a side panel off)? That can come in really handy if one of your drives fails.

  • Onomatopoeia@lemmy.cafe · 11 days ago

    Others have mentioned power: you may want to do some math on drive cost vs. power consumption. There’ll be a drive-size point where paying more per drive is worth it, because fewer drives consume less power than more drives.

    Having built a number of systems, I’m a LOT more conscious of power draw today for things that will run 24/7. My ancient NAS draws about 15 watts at idle with 5 drives (it spins down drives).

    More drives will always mean more power, so maybe fewer but larger drives makes sense. You may pay more up front, but monthly power costs never go away.

    Also, I’ve built a 10-drive NAS like this (because I had the drives, the case, the mobo, and the RAM). It could produce a lot of heat while doing anything, and it was a significant power hog: around 200W when running. And it really didn’t idle very well (I’ve run it with Unraid, TrueNAS, and Proxmox).
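    The up-front-cost vs. monthly-power tradeoff can be sketched numerically. Every input here is an assumption for illustration ($0.30/kWh electricity, 7W average per drive, $40 per used 4TB drive as mentioned in the post, a guessed $200 per new 16TB drive), so substitute your own rates and measured draw:

```python
# Sketch: purchase price plus electricity over 10 years of 24/7 operation.
# All inputs are illustrative assumptions, not measured or quoted values.
KWH_PRICE = 0.30   # $/kWh, assumed
DRIVE_W = 7.0      # average watts per spinning drive, assumed
YEARS = 10

def lifetime_cost(n_drives, price_each):
    hours = YEARS * 365 * 24                      # 87,600 hours
    kwh = n_drives * DRIVE_W * hours / 1000
    return n_drives * price_each + kwh * KWH_PRICE

print(f"16x4TB: ${lifetime_cost(16, 40):,.2f}")   # 640 + 2943.36 = $3,583.36
print(f"4x16TB: ${lifetime_cost(4, 200):,.2f}")   # 800 + 735.84  = $1,535.84
```

    Under these assumptions the cheaper drives end up costing more than twice as much over a decade, which is the point being made above.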

  • Bloefz@lemmy.world · 10 days ago

    Ehhh, one thing I’ve learned over the years: it doesn’t matter how much storage I buy. Within a few weeks it’ll be full.

  • farcaller@fstab.sh · 11 days ago

    You really want ECC RAM and a motherboard/CPU combo that supports it.

  • Seefra 1@lemmy.zip · 10 days ago

    I have never built a machine like that, so I guess I can’t help you much, but like another comment said, it seems like a pain to maintain. I usually have trouble with SATA cables losing contact, and with that setup there are many cables liable to lose contact.

    As for RAM, I wouldn’t worry about it at all: unless you use ZFS, 4GB should be more than enough, even 2GB or less. RAM is expensive now, so you may want to use as little as possible unless you already have some lying around. Does TrueNAS use ZFS? If so, you may want to use another filesystem like btrfs, or test how well ZFS works with the RAM you have. I’m not sure ZFS is worth the trouble. I wouldn’t buy extra RAM.

    As for the CPU, I don’t think it matters much, but like I said, I have never tried your setup. Even an ancient Sandy Bridge should work fine if it’s just a personal NAS with HDDs, even with encryption. It works fine on my NAS.

    Also, if you have access to free old computers, you can try a ghetto setup where each computer handles only 4 drives and you then join them together on a master computer via NBD or NVMe over Ethernet (works with SATA too). But that seems like an even bigger pain to maintain, and it increases your power consumption by a lot.

  • Kairos@lemmy.today · 11 days ago

    Just in case you don’t know, most drives aren’t rated for running with this many in one case.