So, in short: the title. I have a server in a remote location that also happens to be behind CGNAT. I only get to visit this location once a year at best, so if anything goes down, it stays down for the rest of that year until I can go and troubleshoot. I have a main location/home where everything works: I have a fixed IP and can expose multiple services as needed.

I'd like to publish internal services such as HA or similar from this remote location and make them reachable easily enough that I could set the apps up for non-tech users and they could just use them through a normal URL. Is this possible? I already have PiVPN running WireGuard at the main location, and I just tested an LXC container at the remote location: it connects via WireGuard to the main location just fine and can ping/SSH machines correctly. But I can't reach this VPN-connected machine from the main location.

Alternatively, I'm happy to hear other solutions/ideas on how to connect the remote location to the main one.

Thanks!

  • lemmyvore@feddit.nl · 7 months ago

    Install Tailscale on the remote server and leave it up. Whenever you need to connect to it, launch Tailscale on another device that you have access to, and you'll be able to reach the remote server at its Tailscale IP.
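
    For instance, on a Debian-ish remote box that first part might look something like this (the install script is Tailscale's official one; use your distro's package instead if you prefer):

        # one-time setup on the remote server, using Tailscale's official install script
        curl -fsSL https://tailscale.com/install.sh | sh
        # join your tailnet (prints a login URL, or pass an auth key)
        sudo tailscale up
        # note the node's tailnet address so you know what to connect to later
        tailscale ip -4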

    Tailscale consists of a config tool called tailscale and a daemon called tailscaled. The daemon needs to be up for connectivity, and it will raise a network interface called tailscale0 when it works. To connect/disconnect from the tailnet you say tailscale up or down. This is independent of whether the daemon runs or not – that’s a separate issue that’s usually dealt with by systemd or whatever service manager you use.
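
    On a systemd distro that split looks roughly like this (assuming the package installed a tailscaled unit, which the official packages do):

        # make sure the daemon starts at boot and is running now
        sudo systemctl enable --now tailscaled
        # join or leave the tailnet (separate from whether tailscaled is running)
        sudo tailscale up
        sudo tailscale down
        # while connected, the tailscale0 interface should be present
        ip addr show tailscale0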

    Tailscale doesn’t need public IPs because all the clients connect outward to a coordination server, which uses STUN to negotiate direct connections between the nodes. The connection keys always stay on each client machine, the established connections cannot be snooped on by Tailscale, and the clients are FOSS so you can verify that.

    If by any chance the ISP of any node does aggressive UDP filtering, STUN may not work and connections will instead be relayed through a network of so-called DERP servers maintained by Tailscale. These servers are limited in number and location, so relayed connections will be bandwidth- and latency-limited. If STUN succeeds you’ll only be limited by each node’s internet connection.
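
    If you want to see which case you ended up with, the CLI will tell you (the node name/IP below is a placeholder for one of your own nodes):

        # shows whether peers are reached directly or via a DERP relay
        tailscale status
        # diagnoses UDP reachability / NAT traversal and lists nearby DERP servers
        tailscale netcheck
        # pings a peer over the tailnet and reports whether it went direct or via DERP
        tailscale ping remote-node-or-100.x.y.z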

    Tailscale can provide DNS names for the enrolled nodes if you want, but you can also assign fixed IPs to each node in the 100.64.0.0/10 range. I’m not a huge fan of the provided DNS because it’s a bit invasive (it works by temporarily replacing /etc/resolv.conf with a version that resolves via 100.100.100.100 on the tailnet, and integrating that with local DNS can be a chore, as you can imagine). There’s an option for Tailscale nodes to not accept this DNS.
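
    Opting out is just a flag on the client, something like:

        # bring the node up without letting Tailscale touch /etc/resolv.conf
        sudo tailscale up --accept-dns=false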

    Make sure that services on the remote server that you want to access via Tailnet (also) bind to the Tailscale IP (or to 0.0.0.0).
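
    A quick way to double-check that on the remote box (ss ships with iproute2 on most distros):

        # list listening TCP sockets; you want 0.0.0.0/[::] or the node's 100.x.y.z address here
        ss -tlnp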

    Should you mess up, as long as the Tailscale client is still up on the remote server and it has an internet connection, you can still reach it by enabling Tailscale’s “fake” SSH service, which works through the tailscale client rather than a real SSH daemon. But please read up on what it takes to have this fake SSH access available ahead of time (you don’t want to be stuck having to issue a command on the remote server to enable it).
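
    Roughly, enabling it in advance looks like this (the user/node names are placeholders, and your tailnet’s ACL/SSH rules also have to allow the access):

        # on the remote server, ahead of time: advertise Tailscale SSH for this node
        sudo tailscale up --ssh
        # later, from another device on the tailnet, even if the real sshd is broken
        tailscale ssh youruser@remote-node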