Safety Engineer, Dad, Husband, Pilot, Musician. Not necessarily in that order.

  • 3 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • I’m not touching that post again. But a small rant about typesetting on Lemmy: there seems to be no way whatsoever to put angle brackets inside a “code” section. In an overzealous attempt to prevent HTML injection, everything in angle brackets is simply removed when posting (although it still shows up in the preview). In normal text you can work around this with the “&lt;” entity, but not inside “code” segments, where the entity is retained verbatim.
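    To spell out the behavior I observed (current Lemmy versions may of course differ):

    ```
    Typing &lt;tag&gt; in normal text renders as: <tag>
    Typing &lt;tag&gt; in a code span stays as the literal entity: &lt;tag&gt;
    Typing <tag> in a code span survives the preview, then is removed on posting
    ```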



  • If you’re as paranoid as I am about data integrity, SAS drives on a host bus adapter (HBA) in “Initiator Target” (IT) mode, with the on-disk write cache disabled, are the safest option. It will degrade performance when writing many small files concurrently, but not as badly as with SATA drives (that’s for spinning disks, of course, not SSDs). With a good error-correcting redundant system such as ZFS, you can probably get away with the write cache enabled in most cases. Until you can’t.
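    For reference, a minimal sketch of how the on-disk write cache can be inspected and disabled with sdparm (the device name /dev/sda is a placeholder; check against your own setup first):

    ```sh
    # Query the current state of the Write Cache Enable (WCE) bit
    sdparm --get=WCE /dev/sda

    # Clear WCE to disable the on-disk write cache;
    # --save also updates the saved page so the setting survives power cycles
    sdparm --clear=WCE --save /dev/sda
    ```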


  • RAID is generally a good thing but don’t get complacent, follow the 3-2-1 method

    To expand on that: A redundant drive setup and backups serve completely different purposes. The only overlap is the case of a single disk failure, where RAID (or similar) may save the data.

    Redundancy is all about reducing downtime in case of single hardware failures. Backups not only protect you from data loss in case of multiple simultaneous failures, but also from accidental deletion. Failures that require restoration of data almost always involve downtime. In short: You always need backups (unless it’s strictly a local cache, and easily recreatable), but if you want high availability, redundancy may help.

    The 3-2-1 rule for backups, in case you’re unfamiliar: 3 copies of important data, on 2 different media, with 1 copy off-site.
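    As a concrete sketch of how that can look with ZFS snapshots (pool, dataset, and host names here are made up for illustration):

    ```sh
    # Copy 1: the live data on the local pool (medium 1)
    zfs snapshot tank/data@nightly

    # Copy 2: incremental replication to a second pool,
    # e.g. on an external disk (medium 2)
    zfs send -i tank/data@previous tank/data@nightly | \
        zfs receive backup/data

    # Copy 3: the same stream to another machine (the 1 copy off-site)
    zfs send -i tank/data@previous tank/data@nightly | \
        ssh offsite zfs receive remote/data
    ```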





  • It's much more than a fan shroud. It's a baffle specifically designed to guide cooling air over the CPU heatsinks and the RAM modules. This kind of airflow design is very common in servers. I wouldn't run the machine without it, especially since the CPU heatsinks have no dedicated fans but rely entirely on the airflow directed by the baffle.

    And yes, I know they are very similar; in fact, I am quite (though not absolutely) certain that they are identical except for the actual second CPU socket. It's almost as if you didn't read my post. Even the solder pads for the second CPU socket are present in the single-CPU T320. They certainly won't have different PSU connectors. They even share case part numbers.




  • I don’t think there’s anything intrinsically wrong, but as far as I can see you are using only a single disk for the ZFS pool, which will give you integrity checks (you’ll know when something is corrupted) but no way to repair the damage.

    Since this is, by today’s standards, a tiny disk at 100G, I assume this is just a test setup? I’m not sure ZFS inside the guest is particularly well suited for virtual machines; I think it is better to let the host handle physical data integrity, either by putting the disk image on a ZFS filesystem or by giving the VM a ZFS volume (block device) directly.
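    A minimal sketch of the second variant on the host (pool name tank and the size are assumptions based on your post):

    ```sh
    # Create a 100G ZFS volume (zvol); ZFS handles checksums
    # and redundancy on the host side
    zfs create -V 100G tank/vm0

    # The zvol shows up as a block device that the hypervisor
    # can pass to the guest as a raw disk
    ls -l /dev/zvol/tank/vm0
    ```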




  • What are the advantages of RAID 10 over ZFS raidz2? It needs more raw disk space per unit of usable space as soon as you have more than 4 disks, it lacks ZFS’s automatic checksum-based error correction, and it is generally less resilient against multiple disk failures. In the worst case, losing two disks can mean losing the whole array, whereas raidz2 can tolerate the loss of any 2 disks. Plus, with RAID you still need a separate volume manager and filesystem on top.
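    For illustration, a six-disk raidz2 pool is created in one step, filesystem included (names are placeholders; stable /dev/disk/by-id paths are preferable to sdX names):

    ```sh
    # raidz2 over six disks: any two may fail without data loss;
    # usable space is roughly 4/6 of raw capacity
    zpool create tank raidz2 \
        /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 \
        /dev/disk/by-id/disk3 /dev/disk/by-id/disk4 \
        /dev/disk/by-id/disk5 /dev/disk/by-id/disk6
    ```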



  • Yes. I use a G7 N36L as an offsite backup server in my second apartment. It works great with NetBSD and ZFS, using rsnapshot to make remote backups every night.

    Since it is only active for an hour and a half each night, it is my only server that puts its disks into power-save mode the rest of the time. Its computing performance is so low that I don’t even run a folding@home client on it: it usually cannot finish a work package before the deadline.
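    A rough sketch of such a nightly rsnapshot job (retention, paths, and host names are invented for illustration, not my actual config):

    ```sh
    # Excerpt from /etc/rsnapshot.conf -- fields are TAB-separated
    #   snapshot_root   /backup/snapshots/
    #   retain          nightly 14
    #   backup          user@homeserver:/home/    homeserver/

    # Crontab entry: pull the backup at 03:00 each night
    # (/usr/pkg/bin is where pkgsrc installs it on NetBSD)
    0 3 * * *   /usr/pkg/bin/rsnapshot nightly
    ```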




  • For large storage, ECC RAM helps a lot in avoiding data corruption. In combination with a redundant ZFS setup it is almost bullet-proof. (Make no mistake: redundant storage is no substitute for backups! You still need those.)

    One option is to use comparatively old server hardware. I have some pretty old stuff (around 10 years) that uses DDR3 RAM, which is dirt cheap even with ECC (somewhere around 1 €/GB), and it will be fast enough by far for most applications. The downside is higher power consumption for the same performance. To give you a ballpark figure: my Dell T320 with eight 3.5" SAS disks and 32 GB RAM draws some 140 W.
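    If you go that route, it is worth verifying that ECC is actually present and active; on Linux, something like this gives a first indication (output details vary by platform):

    ```sh
    # The Physical Memory Array section reports the ECC capability
    dmidecode -t memory | grep -i "Error Correction Type"

    # With an active EDAC driver, corrected-error counters appear here
    cat /sys/devices/system/edac/mc/mc*/ce_count
    ```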


  • What’s your problem with DAVx⁵? It’s completely and permanently free and fully featured on F-Droid; only the Play Store version costs money. The authors don’t want to make money, they want to motivate you to move away from Google infrastructure.

    If you only need address/phone number sync, then Nextcloud is probably overkill, but I use it and it works great, also for calendar sync and file storage.

    (You don’t need to put the community name in the title, especially not with “@”, which signifies usernames. Communities are prefixed by “!”.)