• 0 Posts
  • 51 Comments
Joined 1 year ago
Cake day: July 30th, 2023

  • I’d like to politely disagree

    Finding alternatives to large software packages is great - don’t think I’m not saying that - but any time you have competitor X and competitor Y, be they both commercial, both F/OSS, or some combination thereof, the competitors must be cognizant of each other when designing features.

    Burying your head in the sand and ignoring Microsoft, Apple, and Google is a very solidly Microsoft-Apple-Google-style play. It’s the play of someone who believes the other side offers no competition. That’s how you get the unwieldy features these tech giants ship: they know they can put in a 70% effort and people won’t be annoyed enough to leave.

    Every tool they make exists because someone needed it. Many of those tools are genuinely important - for one example, the Microsoft Office document formats are treated as near-universal formats for the presentations, spreadsheets, and plain documents passed between businesses.

    But as we as a society design alternatives to those various monopolies (as we should), we need users to want to use the new thing. We have to take what people like - what keeps them on their old platform - and preserve the intent of it as best we can on the new platform. Doing so requires discussing the features those big tech companies offer.

    And as users, when we select the platforms we use, we need to weigh the cost of going with an alternative vs going with a giant. No solution is a perfect solution for everyone, and the chooser needs to weigh the maintenance cost (in hours or money) they will incur, how their users will like/dislike it, and maybe even look at a piece of software and decide “nah the vibes are off”.

    I’d love a world where those three tech giants had proper competition in all fields, and I think their business practices are scummy and need improvement. But the real alternatives to each need some polish before they’re ready to be used by [arbitrary tech illiterate grandmother].


  • Others have some good information here - all I’d like to add to the root is that Windows and Mac have a built-in DNS cache, and it’s pretty straightforward to add a DNS cache to systemd distros (if it’s not already installed or in use) using systemd-resolved, or dnsmasq if you really dislike systemd. Some distros enable this at install time.

    Systems that utilize a DNS cache will keep copies of DNS query results for a period of time, making the application-level name lookup speed essentially 0ms for a cached result. Cold results obviously incur the latency of the DNS server itself.
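
    For a typical systemd distro, enabling the built-in cache is roughly this (a minimal sketch - resolved caches by default once it’s the active resolver, but distros vary in how they wire up /etc/resolv.conf, so check yours):

    ```
    # Start systemd-resolved and point /etc/resolv.conf at its stub listener
    sudo systemctl enable --now systemd-resolved
    sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf

    # Sanity check: repeated lookups should show up as cache hits
    resolvectl query example.com
    resolvectl statistics
    ```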




  • > Far-UVC has a lot of potential once it’s scaled up. Right now, we’re still learning about best practices.
    >
    > Institutions should be adopting this tech at scale.

    If we’re still learning about best practices, why are we talking about deploying this at scale? Self-contradictory article…

    It should be the other way around: figure out if it works academically, then test at small scale, then scale up with proven and reproducible results. That’s how science works - best practices can be formulated and adjusted at each stage as more knowledge is gained. That’s also how we avoid making a massive public health mistake and giving an entire convention center indoor sunburns, especially the people who are more sensitive to UV.





  • TL;DR: a lot of people probably keep using the thing they already know, as long as it works well enough not to be a bother.

    Many, many years ago when I was learning web servers, I think the only ones I found were Apache and IIS. I had a Mac at the time, which came pre-installed with Apache2, so I learned Apache2 and got okay at it. While by release dates Nginx and HAProxy most definitely existed, I don’t think I came across either in my research. I don’t have any notes from the time because I didn’t take any - I was in high school.

    When I started doing Linux things, I kept using Apache for a while because I knew it. Then I found Nginx and learned it in a snap, because its config is hierarchical and reads more like natural language than Apache’s XMLish monstrosity. For the next decade I kept reaching for Nginx whenever I needed a webserver fast, because I knew it would work with minimal tinkering.
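
    For anyone who hasn’t seen the two side by side, here’s roughly the same minimal vhost in each - purely illustrative, with placeholder names, not my actual config:

    ```
    # Apache: XML-ish section tags
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/example
        <Directory /var/www/example>
            Require all granted
        </Directory>
    </VirtualHost>

    # Nginx: plain hierarchical blocks
    server {
        listen 80;
        server_name example.com;
        root /var/www/example;
    }
    ```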

    Now, as of a few years ago, I knew that HAProxy, Caddy, and Traefik all existed. I even tried out Caddy on my homelab reverse proxy server (which has about a dozen applications routed through it). The first few sites were easy - just let the auto-LetsEncrypt do its job - but once I got to the sites that needed manual TLS (I have both an internal CA and Cloudflare’s origin HTTPS certs) and other special config, Caddy started becoming as cumbersome as my Nginx conf.d directory. At the time I also didn’t have a way to get software updates easily on my then-CentOS 7 server, so Caddy was okay-enough, but it was back to Nginx with me because it was comparatively easier to manage.
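
    For context, each manual-TLS site ends up looking something like this in the Caddyfile (hostname, cert paths, and backend address are made up for illustration) - fine for one or two sites, but it piles up quickly:

    ```
    internal.example.lan {
        # cert/key issued by my internal CA instead of auto-LetsEncrypt
        tls /etc/caddy/certs/internal.crt /etc/caddy/certs/internal.key
        reverse_proxy 10.0.0.20:8080
    }
    ```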

    HAProxy is something I’ve added to my repertoire more recently. It took me quite a while and lots of trial and error to figure out the config syntax, which is quite different from anything I’d used before (except maybe kinda like Squid, which I had learned not a year prior…), but once it clicked, it clicked. Now I have an internal high-availability (+keepalived) load balancer that can handle so many backend servers, do wildcard TLS termination, and validate backend TLS certs. I even got LDAP and LDAPS load balancing to AD working on it for services like Gitea that don’t behave well when there’s more than one LDAPS backend server.
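
    As a rough sketch of the pattern that finally made it click (addresses and cert paths here are placeholders, and the usual global/defaults sections are omitted):

    ```
    # Wildcard TLS termination in front, re-encrypt to backends and verify their certs
    frontend https_in
        bind *:443 ssl crt /etc/haproxy/certs/wildcard.pem
        default_backend web_pool

    backend web_pool
        balance roundrobin
        server web1 10.0.0.11:443 ssl verify required ca-file /etc/haproxy/ca.pem check
        server web2 10.0.0.12:443 ssl verify required ca-file /etc/haproxy/ca.pem check

    # LDAP/LDAPS to AD is just TCP-mode passthrough with health checks
    listen ldaps_ad
        mode tcp
        bind *:636
        server dc1 10.0.1.5:636 check
        server dc2 10.0.1.6:636 check
    ```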

    So, at some point I’ll get around to converting that “everything” reverse proxy to HAProxy. But I’ll probably need to deploy another VM or two, because the existing one also has a static web server and I’ve been meaning to break up that server’s roles anyways (long ago, it was my everything server, before I used VMs).




  • On/off:
    I have 5 main chassis excluding desktops. The prod cluster is all flash; the standalone host has one flash array and one spinning rust array; the NAS is all spinning rust. My server disk array is big enough that spinning it up is actually a power sink, and the Dell firmware takes a looong time to get all the drives up on reboot.

    TL;DR: not off as a matter of day/night; off as a matter of summer/winter, for heat.

    Winter: all on

    Summer:

    • prod cluster on (3x vSAN - it gets really angry if it doesn’t have cluster consistency)
    • NAS on
    • standalone server off, except to test ESXi patches or when a vCenter reboot WoLs it (vpxd sends a wake to all standby hosts on program init)
    • main desktop on
    • alt desktops off

    VMs are a different story. Normally I just turn them on and off as needed regardless of season, though in summer I will typically turn off more of my “optional” VMs, in addition to powering off the one server, to reduce the workload. The rough goal is to reduce thermal load so I don’t kill my AC as quickly - it’s probably already running above its duty cycle to keep up. On the physical side, these workloads are virtualized, so this on/off churn doesn’t power-cycle the disk arrays.

    Because all four of my main servers run the same hypervisor (for now, VMware ESXi), VMs can move among the prod cluster to balance load autonomously, and I can move VMs on or off the standalone host by drag-and-drop. When the standalone host is off, I usually turn its VMs off and move them onto the prod cluster so I don’t get daily “backup failure” emails from the NAS.

    UPS: Power in my area is pretty stable, but has a few phase hiccups in the summer. (I know it’s a phase hiccup because I mapped out which wall plugs are on which phase, confirmed with a multimeter that I’m on two legs of a 3-phase grid hand-off, and watched which devices blip off during an event.) For something like a light that will just flicker, or a laptop/phone charger that has high capacitance, such blips are a non-issue. Smaller ones can even be eaten by the massive power supplies my Dell servers have. But my Cisco switches are a bit sensitive and tend to sing me the song of their people when the power flickers - aka fan speed 100% boot-up whining. Larger blips will also boop the Dell servers, but I don’t usually see breaks of more than 3-5 minutes.

    Current UPS setup is:

    • rack split into A/B power feeds, with servers plugged into both and every other one flipped to A or B as its primary
    • single plug devices (like NAS) plugged into just one
    • “common purpose” devices on the same power feed (ex: my primary firewall, primary switches, and my NAS for backups are on feed A, but my backup disks and my secondary switches are on feed B)
    • one 1500VA UPS per feed (two total) - aggregate usage is 600-800w
    • one 1500VA desktop UPS handling my main tower, one monitor, and my PS5 (which gets unreasonably upset about losing power, so it gets the battery backup)

    With all that setup, the gauges on the front of the 3 UPSes all show roughly 15-20m of runtime in summer and 20-25m in winter. I know one may be lower than displayed because its battery is older, but even if it fails and dumps its redundant load onto the newer main UPS, I’ll still have 7-10m of battery in the worst case, and that’s all I really need to weather most power-related issues at my location.



  • Re: Dell BOSS N1 questions (homelab@lemmy.ml)

    Having used Dell BOSS S1 cards even in other Dell servers, there are firmware integration limitations to be aware of. Even an R730 won’t fully tolerate an R740’s BOSS - I’ve only seen those work in the R740 and higher. The control interface for the S1 (and presumably S2) cards is integrated into a menu system in Dell’s BIOS and iDRAC.

    I know specifically for the BOSS-S1, there’s a StarTech board that has a similar form factor and uses the same SATA RAID chipset, but without the Dell firmware. That StarTech board works in non-Dell servers and workstations and has mostly the same features as the S1. (I 100% used my laptop’s eGPU enclosure to set them up a few times. That also definitely caused Windows to BSOD a few times, because it doesn’t know how to eject an entire disk controller, but that’s entirely my fault.) The StarTech controller hasn’t really given me any major problems once I got it up, and has run in my R730s with near 100% uptime for a few months now.

    You may want to check whether StarTech has an NVMe version now as a counterpart to the BOSS-N1 - I haven’t looked recently.

    Something else to consider is what your RAID array will actually be doing: M.2 SATA may be fast enough for a boot disk, while your “real” data array uses SAS or NVMe to get to the CPU. You can even elect to use something fancier like Ceph or ZFS to handle the real data disks without a hardware RAID card. If you’re just booting the server hypervisor and maybe a low-level agent VM or two, and the real data is on another array, that StarTech card may be for you. (You just need FreeDOS to re-flash it to EFI mode.)



  • As a general principle, full-extension rails are probably best sourced from the original chassis vendor rather than trying to make universal rails work.

    If you have a wall-mounted rack, physics is working against you unless your walls are something sturdier than drywall. It’s already a pretty intense, heavy cantilever, and putting a server in there that can extend past the front edge is only going to make that worse.

    If you want to use full-extension rails, you should get a rack that can sit squarely on the floor, on either feet or appropriately rated casters. You should also make sure your heaviest items are at the bottom, ESPECIALLY if you have full-extension rails - it makes the rack less likely to overbalance and tip over when a server is extended.


  • Somewhat halfway between practical use and just messing around for fun.

    Several years ago I built a GPS NTP clock out of an RPi3 and an Adafruit GPS hat. Once I had the PPS driver installed, its precision/drift got pretty good. According to its own self-measurements, I got pretty dang close to NIST stratum 1 NTP servers, but those are hundreds of miles away, so that measurement isn’t super precise. It’s still running today, clocking nearly 24/7 operation since (checks shopping history) 2017, though I replaced the breadboard and mini module with a full-sized hat using the same chipset in 2021.
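
    For anyone wanting to replicate it, the time-sync side boils down to a couple of refclock lines - shown here in chrony flavor purely as an illustration (ntpd has equivalent SHM/PPS drivers, the gpsd setup is glossed over, and the offset value needs tuning per receiver):

    ```
    # chrony.conf fragment: gpsd feeds coarse NMEA time via shared memory,
    # and the PPS refclock disciplines it to the pulse edge
    refclock SHM 0 refid NMEA offset 0.200 precision 1e-1 noselect
    refclock PPS /dev/pps0 lock NMEA refid GPS precision 1e-7
    ```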

    Recently I acquired a proper hardware GPS clock, stacked the two against each other, and found out my RPi did not do half bad: it gets within 0.5-10ms of the professionals (I’m pretty sure I’d need more precise measuring equipment than a regular computer to tell the difference between the two at this point). Now my homelab has fully redundant, internet-disconnected stratum 1 time. I’ve been half considering whether I could write a GPSD driver for it as a joke, but I know upstream won’t accept it because it doesn’t offer SOOO many features they’d need.

    As for what else - I just kind of keep an eye out for projects related to GPS and high-precision time, like the open source atomic clock PCI card that was released a few years ago. Finding out what people are doing to get better and better time is just downright interesting.

    Outside of the time world, it’s just fun to see what projects people come up with relating to maps and navigation. A stretch goal, once I have enough server horsepower, is to build a render-capable OpenStreetMap server with my home region loaded to start, but eventually I’d like to get it to the point where I can load and process world.osm. That… requires a LOT of CPU and SSD space.