I’ve got Immich working great on Unraid, but when I’m on my own network I can’t really use it; it just fails to resolve DNS. I looked it up, and it seems my router doesn’t support hairpin NAT. It’s an Aginet HB810. I found a workaround in the Immich client where you can add a second, network-specific server entry, but it doesn’t seem to work very reliably.

What are my options?

  • spaghetti_carbanana@krabb.org · 8 hours ago

    You’ve got two options I can think of:

    1. As others have alluded to, split DNS. You need something handling DNS resolution internally that lets you add custom records. You’ll need to add a record of type “A” pointing to the internal IP where Immich sits.

    2. Since you have Immich published to your public IP, you can use hairpin NAT. This is something that is a lucky dip with routers as to whether it works or not and only some make it configurable. This will allow you to hit Immich via public IP and the router will “hairpin” the traffic out to the WAN interface and back in. This is how I do it so I don’t make a spaghetti mess of DNS records.
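    For option 1, the record itself is tiny. On a Pi-hole (v5-era layout; the path, name, and address here are examples, not from this thread) it can be a single hosts-format line:

```
# /etc/pihole/custom.list — hosts format: <internal IP> <name>
192.168.1.50 immich.example.com
```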

    Failing to resolve DNS doesn’t sound like this is actually the problem though. Do you have a domain registered and DNS records pointing to your public IP? Does it resolve fine outside your network? If yes, then something may be wrong on your internal network’s DNS resolution.

    Also worth noting, if you only just created the records in public DNS then tried to resolve it straight away, they will not have propagated yet and your DNS resolver will cache the “record doesn’t exist” result for some time (most I’ve seen is a couple of hours).

  • cecilkorik@lemmy.ca · 2 days ago (edited)

    Opinionated wall of text incoming:

    Hairpin is an annoying hack. It happens, and is necessary, when DNS gives you a public IP but the service is not actually listening on that address itself: the router owns that address, and it sits on the public-facing side of the router, not the side you’re on. So your traffic has to go out to the public side of the router, turn around (the hairpin turn), and come back in as if you were a public user, just to reach a service that was literally sitting right next to you (on the private side) the whole time.

    What you should do instead: You should have your own internal DNS for your own personal network that resolves the DNS properly within the context of that network. This avoids needing to use hairpin at all, because your traffic is never trying to go out to the public internet in the first place. If you get the correct, context-specific best path IP to your services at all times, you don’t have to use the naive, public IP for immich that doesn’t even actually exist on your local network.

    The terminology around all this is confusing and sometimes stupid because private networks behind NAT never really existed when DNS was invented, and a lot of people deal with it in stupid and overcomplicated ways. If this same DNS server were then also going to be shared and used publicly to host your own domain names to other people, you would need a thing called “split zones” or “split DNS” but you don’t need to do that and you should avoid that too. Keep private DNS private, and leave public DNS out in public. Separate them intentionally and deliberately.

    If you are getting the public IP for your Immich, then you are using its public DNS. I will try to make it simple for you, the way I think everyone should do it:

    Your LAN/VPN environment is private. It should have its own dedicated, authoritative private DNS server whose purpose is to completely and comprehensively service all the DNS lookup needs of that LAN/VPN environment, and to be the sole source of truth within that network.

    Often the local network’s DNS is already correctly configured and provided by your router to handle all public IPs, and this is usually completely fine for self-hosting. What matters is that you should be able to add custom IP addresses to it, and that it is private to your network. Nobody else should have access to this DNS configuration, not because it’s really important for security but because it is irrelevant outside the context of your local network, which is usually exactly what your router’s DNS provides. Your internal network DNS is responsible for two things within that environment:

    • Providing the correct local (private) IP address for traffic to local services that belongs only on your network.
    • Providing official public IP addresses for any OTHER things outside that local network, like any normal DNS server (typically accomplished by DNS forwarding to your ISP’s servers or to Google/Cloudflare DNS servers, or by resolving from the root servers directly, called recursive DNS). Almost any router or DNS server will do this by default; if you’re accessing Lemmy, you’re getting Lemmy’s public IP from your router’s DNS configuration.

    You just have to implement and maintain the first part, usually in your router’s configuration. If you want more control or consistency over the DNS your local network is using it can also be self-hosted with something small like dnsmasq, or even big old granddaddy bind/named (not as complex as it seems and very standardized). Either way, that’s your responsibility, and once you’re providing correct local IPs for Immich on your local DNS (outside your network, you and the public will still use public DNS and get the public IP) everything will just work.
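    As a sketch of both responsibilities in dnsmasq (domain, address, and upstream here are made-up examples):

```
# dnsmasq.conf: answer locally for your own names, forward everything else
address=/immich.example.com/192.168.1.50   # local A record for Immich
server=1.1.1.1                             # upstream resolver for all other lookups
```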

    Hairpin may feel convenient. It’s not; it’s a workaround for a misconfiguration. Having private DNS that is separate and distinct from public DNS may feel like duplication of effort, but it’s not; it’s fundamental to even having a local network, and it puts you in the driver’s seat for the layout of that network. Take responsibility for it instead of letting hairpin fix your mess.

    • Amju Wolf@pawb.social · 8 hours ago

      I agree it’s a stupid hack, but there are good reasons to use public addresses in your local environment too: for example, you’ll need them for any roaming device like a laptop or a phone. It also vastly simplifies certificate management, since you can just use your existing publicly valid certs to access your services.

      The only proper solution would probably be ipv6, but that’s not trivial either.

      • cecilkorik@lemmy.ca · 6 hours ago

        You can do all those things with proper routing and there is no difference from mobile devices (as long as they use DHCP and what mobile device wouldn’t?). What I’m suggesting does not change anything on the public side. You still authenticate publicly to renew your certificates. You still give the same certificates to both public and local networks. They’re still valid. Nothing changes.

        The only difference is that when you’re local, your DNS gives you the correct local IP address where that service is hosted, say, 192.168.12.34, instead of using public DNS, getting an external IP that’s on the wrong side of the router, and having to go outside your own network and come back in. Hairpin is like that Simpsons episode where Abe goes in the revolving door, takes off his hat, puts his hat back on, and goes back out the same revolving door in the span of 2 seconds. It’s pointless; why are you doing that? If you didn’t want to be on the outside of the network, why are you going to the outside of the network first? Just stay inside the network. Get the right IP. No hairpin routing needed. No certificate madness needed. Everything just works the way it’s supposed to (because this is in fact the way it’s supposed to work).

    • tko@tkohhh.social · 16 hours ago

      I super appreciate where you’re coming from on this. Unless I’m mistaken, NAT port forwarding makes this not quite so clean. If my (internally hosted) site is published on ports other than 80/443, is there any way to route them internally without needing to include the port in the request?

      If not, then I either have to include the port in my request when I’m inside the LAN, or I need to set up a macvlan in my docker network to facilitate a LAN IP and standard ports.

      Do I have that right?

      • cecilkorik@lemmy.ca · 15 hours ago (edited)

        I’d argue that your internally hosted site should not be published on ports other than 80/443. Published is the key word here, because the sites themselves can run on whatever port you want and if you want to access them directly on that port you can, but when you’re publishing them and exposing them to the public you don’t want to be dealing with dozens of different services each implementing their own TLS stack and certificate authorities and using god-knows-what rules for security and authentication. You use a proxy server to publish them properly. And there’s no reason you can’t or shouldn’t use that same interface internally too. Even though you technically might be able to directly access the actual ports the services are running on on your local network, you really probably shouldn’t, for a lot of reasons, and if you can, maybe consider locking that down and making those services ONLY listen on 127.0.0.1 or isolated docker networks so nothing outside the proxy host itself can reach them.
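        For the Docker case, that lockdown can be as simple as binding the published port to loopback, so only the proxy running on the same host can reach it (service name and port number here are just examples):

```yaml
services:
  immich-server:
    ports:
      - "127.0.0.1:2283:2283"   # reachable only from this host, not from the LAN
```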

        If you don’t want your services to listen on 80/443 themselves that’s reasonable and good practice, but something should be, and it should handle those ports responsibly and authoritatively to direct incoming traffic where it needs to go no matter the source. Even if (or especially if) you need to share that port with various other services for some reason, then either way you need it to operate that port as a proxy (caddy, nginx, even Apache can all do this easily). 443 is the https port, and in the https-only world we should all be living in, all https traffic should be using that port at least in public, and the https TLS connection should be properly terminated at that port by a service designed to do this. This simplifies all sorts of things, including domain name management and certificate management.

        tl;dr You should have a proxy that publishes all your services on port 443 according to their domain name. When https://photos.mydomain.com/ comes in, it hits port 443 and the service on port 443 sees it’s looking for “photos”, handles the certificates for photos, and then decides that immich is where it is going and proxies it there, which is none of anyone else’s business. Everyone, internal or external, goes through the same, consistent, and secure port 443 entrance to your actual web of services.
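        A minimal sketch of that proxy as a Caddyfile (domain and backend address are assumptions, not from this thread):

```
photos.example.com {
    # Caddy listens on 443, terminates TLS (fetching its own certificate),
    # and routes by hostname to the internal service
    reverse_proxy 192.168.1.50:2283
}
```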

        • tko@tkohhh.social · 15 hours ago (edited)

          The challenge here is that the host is Unraid, which publishes its own interface on 80/443. My reverse proxy is of course handling all requests for my sites, but that is ALSO running on a container, and must be listening on something other than 80/443 when using host or bridge networking.

          So, if I’m following along correctly, I would need to put my reverse proxy on a different host (bare metal or VM) in order for it to listen on 80/443.

          • cecilkorik@lemmy.ca · 11 hours ago (edited)

            I’m not too familiar with unraid but from a little research I just did it seems like you’re right. That does seem like a really unfortunate design decision on their part, although it seems like the unraid fans defend it. Obviously, I guess I cannot be an unraid fan, and I probably can’t help you in that case. If it were me, I would try to move unraid to its own port (like all the other services) and install a proxy I control onto port 443 in its place, and treat it like any other service. But I have no idea if that is possible or practical in unraid. I do make opinionated choices and my opinion is that unraid is wrong here. Oh well.

            • tko@tkohhh.social · 11 hours ago

              Totally fair… I appreciate you engaging with me, your perspective is appreciated! I won’t defend Unraid’s choice when it comes to the UI ports, but I will simply say that there are things that are really nice about Unraid from a usability standpoint.

              Thanks again for your thoughts!

              • cecilkorik@lemmy.ca · 10 hours ago

                Convenience is great until it becomes inconvenient. But that’s a journey we all make :)

  • ikidd@lemmy.world · 1 day ago

    The term you would search for here is “split-horizon DNS”. Assuming you’re using a real domain name with hosts, you want a DNS server inside that resolves the LAN address, and the outside DNS server for everyone else resolves your WAN address (which presumably you reverse-proxy to inside host).
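    The split-horizon idea reduces to “the answer depends on which side of the router the client sits on.” A minimal Python sketch of that decision (all names and addresses here are made up):

```python
# Toy split-horizon resolver: one name, two answers, chosen by client address.
import ipaddress

LAN = ipaddress.ip_network("192.168.1.0/24")   # hypothetical LAN
RECORDS = {
    "immich.example.com": {
        "internal": "192.168.1.50",   # answer for LAN clients
        "external": "203.0.113.7",    # answer for everyone else (the WAN IP)
    },
}

def resolve(name: str, client_ip: str) -> str:
    """Return the LAN address for LAN clients, the WAN address otherwise."""
    views = RECORDS[name]
    if ipaddress.ip_address(client_ip) in LAN:
        return views["internal"]
    return views["external"]

print(resolve("immich.example.com", "192.168.1.23"))   # -> 192.168.1.50
print(resolve("immich.example.com", "198.51.100.9"))   # -> 203.0.113.7
```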

    Even better is to not expose the service at all from the outside, use a VPN like Tailscale, and then use their MagicDNS service on the tailscale network to keep everything behind the firewall.

    Every service you expose to the outside is more attack surface.

    • Lem453@lemmy.ca · 1 day ago

      On my unraid router, this is called DNS override

      immich.example.ca resolves to a local IP when you query it from within the network. For every DNS entry on Cloudflare for my domain, I have an equivalent one on my router and Pi-hole that points to the local IP.

  • FauxLiving@lemmy.world · 2 days ago (edited)

    On your LAN DNS server (say, Pi-hole), you could add an A record for your Immich domain name that points to the internal IP address, so clients on your LAN simply resolve the LAN IP instead of trying to do fancy NATing. Make sure your browser doesn’t try to do DNS over HTTPS, which would skip your local DNS.

    Or you could run everything on a mesh VPN like Tailscale. That way the (VPN) IP of the Immich server doesn’t change, and the tailnet will route the traffic over your LAN when your clients are local.

      • FauxLiving@lemmy.world · 2 days ago

        Yeah, set up a Pi-hole container/server to do DHCP and disable DHCP on your router. The documentation should cover it, but you have to use network_mode: host in order for it to do DHCP.
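        A compose sketch for that setup (image tag, volumes, and capabilities are assumptions; check the Pi-hole docs for your version):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host       # DHCP needs to see LAN broadcast traffic
    cap_add:
      - NET_ADMIN            # required for the built-in DHCP server
    environment:
      TZ: "Etc/UTC"
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```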

        You can then add an A record entry for your Immich server’s domain name pointing to the LAN IP and so any device on your LAN will resolve its domain to the LAN IP.

        You also get pi-hole DNS filtering/adblock and, probably, a larger DNS cache than what the router provides.

  • grehund@lemmy.world · 2 days ago

    Are you interested in the networking side of self hosting? If so, you should get a better router, something you can run OPNsense or similar on. There are other “options”, but they’re workarounds that avoid fixing the real problem.

  • Buck@jlai.lu · 2 days ago

    That’s what made me install AdGuard Home, just so that I could bypass my hairpin DNS issue. There are still things that don’t work and I haven’t found the time to fix them, but for me at least, Immich works the same inside and outside the house! (My gf uses /e/OS and her DNS overrides AdGuard Home, which is a shame, but that’s on the list of “doesn’t quite work perfectly”.)

      • Buck@jlai.lu · 1 day ago

        I don’t think it’s a no-go, but since I haven’t figured out DNS over HTTPS for my AGH instance, I don’t want to replace her default DNS with another one.

  • frongt@lemmy.zip · 2 days ago

    Okay so… How do you have it set up and configured? You’ve given us nothing to go on.

  • Decronym@lemmy.decronym.xyz [bot] · 15 hours ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters | More Letters
    --- | ---
    DHCP | Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
    DNS | Domain Name Service/System
    HTTP | Hypertext Transfer Protocol, the Web
    HTTPS | HTTP over SSL
    IP | Internet Protocol
    NAT | Network Address Translation
    SSL | Secure Sockets Layer, for transparent encryption
    TLS | Transport Layer Security, supersedes SSL
    VPN | Virtual Private Network
    nginx | Popular HTTP server

    8 acronyms in this thread; the most compressed thread commented on today has 17 acronyms.

    [Thread #227 for this comm, first seen 9th Apr 2026, 15:10]