  • Looking at the diagram, I don’t see any issue with the network topology. And the power arrangement also shouldn’t be a problem, unless you require the camera/DVR setup to persist during a power cut.

    In that scenario, you would have to provide UPS power to all of: the PoE switch, the L3 switch, and the NVR. But if you don’t have such a requirement, then I don’t see a problem here.

    Also, I hope you’re doing well now.





  • The original reporting by 404media is excellent in that it covers the background context, links to the actual PDF of the lawsuit, and reaches out to an outside expert to verify information presented in the lawsuit and learned from their research. It’s a worthwhile read, although it’s behind a paywall; archive.ph may be effective though.

    For folks that just want to see the lawsuit and its probably-dodgy claims, the most recent First Amended Complaint is available through RECAP here, along with most of the other legal documents in the case. As for how RECAP can store copies of these documents, see this FAQ and consider donating to their cause.

    Basically, AXS complains about nine things, generally around: copyright infringement, DMCA violations (ie hacking/reverse engineering), trademark counterfeiting and infringement, various unfair competition statutes, civil conspiracy, and breach of contract (re: terms of service).

    I find the civil conspiracy claim to be a bit weird, since it would require proof that the various other ticket websites actually made contact with each other and agreed to do the other eight things that AXS is complaining about. Why would those other websites – who are mutual competitors – do that? Of course, this is just the complaint, so it’s whatever AXS wants to claim under “information and belief”, aka it’s what they think happened, not necessarily with proof yet.


  • Your primary issue is going to be the power draw. If your electricity supplier has cheap rates, or if you have an abundance of solar power, then it could maybe find life as some sort of traffic analyzer or honeypot.

    But I think even finding a PCI NIC nowadays will be rather difficult. And that CPU probably doesn’t have any sort of virtualization extensions to make it competitive against, say, a Raspberry Pi 5.


  • To lay some foundation, a VLAN is akin to a separate network with separate Ethernet cables. That provides isolation between machines on different VLANs, but it also means each VLAN must be provisioned with routing, so as to reach destinations outside the VLAN.

    Routers like OpenWRT often treat VLANs as if they were distinct NICs, so you can specify routing rules such that traffic to/from a VLAN can only be routed to WAN and nowhere else.

    At a minimum, for an isolated VLAN that requires internet access, you would have to (see the sketch after this list):

    • define an IP subnet for your VLAN (e.g. a /24 for IPv4 and a /64 for IPv6)
    • advertise that subnet (DHCP for IPv4 and SLAAC for IPv6)
    • route the subnets to your WAN (NAT for IPv4; ideally no NAT66 for IPv6)
    • and finally enable firewalling

    As a reminder, NAT and NAT66 are not firewalls.
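
    On OpenWrt, a minimal sketch of those steps might look like the following, assuming a recent DSA-style release (21.02+); the VLAN ID, port names, and addresses are all placeholders:

        # /etc/config/network
        config bridge-vlan
            option device 'br-lan'
            option vlan '30'
            list ports 'lan4:u*'             # untagged member port for the isolated device

        config interface 'iot'
            option device 'br-lan.30'
            option proto 'static'
            option ipaddr '192.168.30.1'
            option netmask '255.255.255.0'   # the IPv4 /24 from the list above
            option ip6assign '64'            # hand the VLAN an IPv6 /64 for SLAAC

        # /etc/config/dhcp
        config dhcp 'iot'
            option interface 'iot'
            option start '100'
            option limit '150'
            option leasetime '12h'

        # /etc/config/firewall
        config zone
            option name 'iot'
            list network 'iot'
            option input 'ACCEPT'            # tighten to REJECT plus explicit DHCP/DNS rules if desired
            option output 'ACCEPT'
            option forward 'REJECT'

        config forwarding
            option src 'iot'
            option dest 'wan'                # to WAN and nowhere else

    The forwarding stanza is what encodes the “WAN and nowhere else” rule: with no second forwarding entry, the iot zone can’t reach your other VLANs at all.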


  • “just reuse old equipment you have around”

    Fully agree. Sometimes the best equipment is that which is in-hand and thus free.

    “you can just send vlan tagged traffic across a dumb switch no problem”

    A small word of caution: some cheap unmanaged switches rigidly enforce a 1500-byte payload size, and a switch that has no clue 802.1Q VLAN tags even exist will count the extra 4 bytes of tag as payload. So your workable MTU for tagged traffic could drop to 1496 bytes.

    Most traffic will likely traverse that switch just fine, but full-sized 1500-byte payloads with a VLAN tag may be dropped or cause checksum errors. Large file transfers tend to use the full MTU, so be aware of this if you see strange issues specific to tagged traffic; there’s a quick test below.
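
    One way to check is to ping across the switch with the don’t-fragment bit set, assuming Linux iputils ping and a made-up peer address:

        ping -M do -s 1472 192.168.30.1   # 1472 B of data + 28 B of ICMP/IP headers = a full 1500 B packet
        ping -M do -s 1468 192.168.30.1   # 1468 + 28 = 1496 B

    If the larger ping only fails on tagged paths while the smaller one succeeds, the switch is likely eating full-size tagged frames, and clamping the interface MTU to 1496 is the usual workaround.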


  • I only have experience with Mellanox CX-5 100Gb cards at work, but my understanding is that mainline Linux has good support for the entire CX lineup. That said, newer kernel versions – starting at maybe 5.4? – will have all sorts of bug fixes, so hopefully your preferred distro ships with those driver modules built in, or at least loadable.

    As for Infiniband (IB), I think you’d need transceivers with specific support for IB. That Ethernet and IB share the (Q)SFP(+) modular connector does not guarantee compatibility, although a quick web search shows a number of transceivers and DACs that explicitly list support for both.

    That said, are you interested in IB fabrics themselves, or in what they can enable? One use-case native to IB is RDMA, but it has since been brought to so-called “Converged” Ethernet in the form of RoCE, in support of high-performance storage technologies like SPDK that enable things like NVMe storage over the network.

    If all you’re looking for are the semantics of IB, and you’re only ever going to have two nodes that are direct-attached, then the Linux fabric abstractions can be used the same way you’d use IB. The debate of Converged Ethernet (CE) vs IB is more about whether/how CE switches can uphold the same guarantees that an IB fabric would. Direct attachment avoids these concerns outright.

    So I think perhaps you can get normal 40 Gb Ethernet DACs to go with these, and still have the ability to play with fabric abstractions atop Ethernet (or IP if you use RoCE v2, but that’s not available on the CX-3).

    Just bear in mind that IB and fabrics in general will get complicated very quickly, because they’re meant to support cluster or converged computing, which try to make compute and storage resources uniformly accessible. So while you can use fabrics to transport a whole NVMe namespace from a NAS to a client machine with near line-rate performance, or set up some incredible RPC bindings between two machines, there may be a large learning curve to achieve these.
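
    To give a flavor of the endgame, attaching a remote NVMe namespace over RDMA looks roughly like this with nvme-cli; the address and NQN here are made up, and the target side needs its own export configuration first:

        modprobe nvme-rdma
        nvme discover -t rdma -a 10.0.0.2 -s 4420   # list subsystems the target exports
        nvme connect -t rdma -a 10.0.0.2 -s 4420 -n nqn.2024-01.example:nas.nvme1
        nvme list                                   # the remote namespace shows up as a local /dev/nvmeXnY

    Getting the target configured, and answering the lossless-Ethernet questions above, is where most of that learning curve lives.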





  • For wireless APs, Ubiquiti equipment is fairly well-priced and capable for prosumer gear, although I’m beginning to be less enthralled with the controller model for APs. They can also operate on 48vdc passive power, or 802.3af/at PoE, which might work nicely if you have a compatible switch.

    I’ve heard from colleagues running Plex on Proxmox that a high core count is nice, except when doing transcoding, where you either want high single-core performance or a GPU to offload to. So an AMD Epyc CPU might serve you well, if you can find one of the cheap ones being sold off from all the Chinese data centers on eBay.

    Now with that said, have you considered deploying against existing equipment, and then identifying deficiencies that new hardware would fix? That would certainly be the fastest way to get set up, and it lets you experiment for cheap while waiting for any deals that might pop up.


  • I recall watching a documentary (on Curiosity Stream maybe? I’m no longer subscribed) on data storage longevity. It covered DNA storage, for which I think this PBS video w/ transcript provides more recent coverage of developments, as well as holographic storage, for which I could only find the Wikipedia page.

    As for which one I think might be the future, it’s tough to say. Tape is pretty good and cheap but slow for offline storage. Archival media will probably end up all being offline storage, although I could see a case for holographic/optical storage being near line. Future online storage will probably remain a tough pickle: cheap, plentiful, fast; select at most two, maybe.


  • Similar to your modem case, the fibre ONT on the side of my house is now PoE powered, because it would otherwise need two pairs from the CAT6 cable to carry 12v to itself from a backup battery supply inside the house. Replacing that supply with PoE allowed me to centralize my network stack’s power source, so that a single UPS in my networking closet can power the ONT. It also reflects the reality that if my PoE switch goes down, my network is hosed anyway. And with only two pairs remaining for data, it would have been impossible to realize 1 Gbps on the CAT6.

    I also have PoE to the RPi1 units which attach to my TVs. These serve as set-top boxes with CEC interactivity via the TV’s HDMI port, and are PoE because I insist on all my devices being wired rather than on WiFi, so I might as well provide power as well. They use a microUSB PoE splitter, because 1) the RPi PoE hats mean the boards can’t fit into standard RPi cases, and 2) the PoE hat runs very hot and makes a high-frequency squeal, which was unacceptable in this application.

    Power cycling via SNMP on the switch is another nice benefit of having stuff PoE powered. In fact, I have one more application which depends on this behavior. I have a blade server in my garage that would otherwise consume a lot of standby power when I don’t need it. To fix that, a 240vac relay with a 12vdc control coil sits ahead of it, so activating the relay turns on the blade server. That relay is powered by PoE and commanded by the switch, so whenever I want the blade server, it’s only an SNMP command away. iDRAC then communicates over the network using that same CAT6 that’s powering the relay, again recognizing the dependency that if PoE fails, the blade server is down anyway.
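
    For reference, on switches that implement the standard POWER-ETHERNET-MIB (RFC 3621), that power cycle is a single OID write; a sketch with net-snmp, where the community string, switch address, and port index are placeholders (many vendors hide PoE control behind their own enterprise MIBs instead):

        snmpset -v2c -c private 192.168.1.2 1.3.6.1.2.1.105.1.1.1.3.1.4 i 2   # pethPsePortAdminEnable false(2): port off
        snmpset -v2c -c private 192.168.1.2 1.3.6.1.2.1.105.1.1.1.3.1.4 i 1   # true(1): port back on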

    I’m only using 802.3at power levels right now, as that’s all my switch can do. If I ever acquire an 802.3bt switch, I might consider PoE lighting or PoE phone chargers, or silly things like that. There’s a lot that can be done with 60-ish Watts. Note that the efficiency of PoE switches tends to be abysmal when lightly loaded.


  • “If the server is sent a signal to shutdown due to a grid outage, who is telling it the grid was restored?”

    Ah, I see I forgot to explain a crucial step. When the UPS detects that grid power is lost, it sends a notification to the OS; in your case, it is received by apcupsd. What happens next is a two-step process: 1) the UPS is instructed to power down after a fixed time period – one longer than it would take for the OS to shut down – and 2) the OS is instructed to shut down. Here is one example of how someone has configured their machine like this. The UPS will stay off until grid power is restored.

    In this way, the server will indeed lose power, shortly after the OS has already shut down. You should be able to configure the relevant delay parameters in apcupsd to preserve however much battery state you need to survive multiple grid events.
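
    For concreteness, the relevant knobs in /etc/apcupsd/apcupsd.conf look roughly like this; the values are illustrative, not recommendations:

        UPSCABLE usb
        UPSTYPE usb
        ONBATTERYDELAY 6    # seconds on battery before the onbattery event fires
        TIMEOUT 60          # initiate shutdown after this many seconds on battery
        BATTERYLEVEL 50     # ...or when charge falls below this percentage
        MINUTES 10          # ...or when estimated runtime falls below this many minutes

    The UPS-side half happens at the very end of shutdown, when apccontrol runs apcupsd --killpower, telling the UPS to drop its outlets after a short grace period and re-energize them once grid power returns.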

    The reason the UPS is configured with a fixed time limit – as opposed to, say, waiting until power draw drops below some number of watts – is that it’s easy and cheap to implement, and it’s deterministic. Think about what would happen if an NFS mount or something got stuck during shutdown, running down the battery and ending in the very unexpected power loss the UPS was meant to avoid. And even if all the local filesystems were properly unmounted in time, booting up later on a depleted battery and hitting a second grid fault mid-mount could result in data loss. Here, the risk of accidentally cutting off the shutdown procedure is balanced against the risk of another fault on power-up.


  • Answering the question directly, your intuition is right that you’ll want to limit the ways that your machine can be exploited. Since this is a Dell machine, I would think iDRAC is well suited to be the control mechanism here. iDRAC can accept SNMP commands and some newer versions can receive REST API calls.

    But stepping back for a moment, is there any reason why you cannot configure the “AC Power Recovery” option in the system setup to boot the machine when power is restored? The default behavior is to remain as it was, but you can configure it to always boot up.

    From your description, it sounds like your APC unit notifies the server that the grid is down, which results in the OS shutting down. Presumably, the APC unit will soon exhaust its battery and then the r320 will be without AC power. When the grid comes back up, the r320 will see AC power return and can then react by booting up, if so configured. Is this not feasible?
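
    If it is feasible, that same option can also be flipped through iDRAC rather than the BIOS setup screen. A sketch with racadm, assuming an iDRAC7-era r320; the attribute path may differ by firmware generation, so verify with racadm get BIOS.SysSecurity first:

        racadm set BIOS.SysSecurity.AcPwrRcvry On
        racadm jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW   # BIOS changes apply via a staged job and reboot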



  • Did y’all mean to say milliseconds, and not microseconds? Sub-millisecond power loss would be less time than one AC cycle, whether 50 or 60 Hz.

    Anyway, I do recall seeing some enterprise gear specifying operation through a drop in AC power lasting two cycles, precisely to cover the switch to UPS power. Two cycles at 60 Hz is 2/60 s, so up to about 33 milliseconds (40 ms for 50 Hz power). A cursory search for hybrid inverters online shows a GroWatt with “<20ms” switchover, so this may be fine for servers and switches, when the inverter is operated without any solar panels.

    For consumer grade equipment, all bets are off; some cheaper switch-mode power supplies do very weird things under transient conditions.


  • I second this idea, if it’s feasible. As noted elsewhere in this thread, the lead-acid batteries in UPS units have a limited lifespan, even if not regularly drained. Solar and off-grid enthusiasts have determined that lead-acid and lithium batteries reached overall-lifetime-cost parity years ago, and it’s now firmly in lithium’s favor, mostly due to the greater number of recharge cycles.

    Contraindications for lithium batteries would include:

    • high local costs for lithium battery packs
    • lack of space for the hybrid inverter, as they’re usually not rack-mountable
    • the homelab drops below 0 °C (32 °F), in the specific case of LiFePO4 cells, which must not be charged below freezing

    That said, breathing life into old equipment is usually more environmentally friendly than acquiring new equipment.