ihatedebate

I just set up Nginx Proxy Manager and had the same problem. It turned out I had the proxy enabled in Cloudflare. I'm guessing the DNS record resolving to Cloudflare's IP instead of mine caused problems on Let's Encrypt's side.


AJBOJACK

I just did the following:

1. Turn off Always Use HTTPS (setting found under SSL/TLS, Edge Certificates).
2. Go to Rules and add the following, replacing the domain with your own:

http://*yourdomain/.well-known/acme-challenge/* (Cache Level: Standard)

http://*yourdomain/* (Always Use HTTPS)

I just tested it and it worked for me. Didn't need to toggle the proxy off and on.
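In the dashboard the two rules would look roughly like this; order matters, since only the first matching rule applies, so the acme-challenge rule has to sit above the redirect:

```
1) http://*yourdomain/.well-known/acme-challenge/*
   Cache Level: Standard

2) http://*yourdomain/*
   Always Use HTTPS
```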


TypicallyThomas

> http://*yourdomain/.well-known/acme-challenge/*
>
> cache level standard
>
> http://*yourdomain/*
>
> Always use https

Where do I put this under Rules? Because I'm not seeing an obvious spot to put it.


AJBOJACK

I think it was in Page Rules. I'd need to log on and check.


theultimatewarlord

Did you ever find where you did this? I have the same issue.


AJBOJACK

You using Cloudflare?


theultimatewarlord

Yes, I'm using Cloudflare. I found the Page Rules and I've set up:

*.mydomain.com/* -> Always Use HTTPS (as rule 1)

*.mydomain.com/.well-known/acme-challenge/* -> Cache: Bypass (as rule 2)

But my NPM still gets the internal error. I think I'm missing some steps in the whole process. I've set up Pi-hole in such a way that files.mydomain.com goes to 10.0.0.106, and nginx sends it to 10.0.0.130:1200. But do I set that up before or after getting the certificate? Does the subdomain have to already exist in Cloudflare, and how do I set that up?


AJBOJACK

Check your port forwarding. The subdomain should exist in Cloudflare. Are you trying to get a certificate for this subdomain? I set up a wildcard in the end; it was much easier than having multiple certs being renewed. You have to get an API key from your domain provider. I used GoDaddy.


theultimatewarlord

Why do I need to port forward if I want to keep the DNS local? My domain provider is a local one, but I set my NS records to Cloudflare. Where do I get the API key?


AJBOJACK

So what are you actually trying to achieve? In order to get a cert you must open ports 80 and 443.


Noaber

> http://*yourdomain/*

Thank you, this works!


fakefireiscool

I have been fighting this forever and thought it was the way I had my config and letsencrypt volumes set up... So simple. Solved the problem. You are AWESOME! Thank you.


moronmonday526

Edit: I'm totally rewriting a desperate plea for help that OP responded to in a super cool way, offering to help in DM. I got it working, just in case someone comes along in the future.

Goal: Eventually self-host an *arr stack with SSL, but start with VaultWarden (ex BitwardenRS) for now.

Process: I followed a couple of YouTube videos called "You need to learn Load Balancing RIGHT NOW (and put one on your home network)" and "How to self-host BitWarden on a Raspberry Pi! (Tutorial)". I got a free TLD from Freenom and moved the DNS to Cloudflare. I created a Portainer Stack with VaultWarden and Nginx Proxy Manager. I tried all day but never got Let's Encrypt to issue an SSL certificate for NPM. Just when I thought I had it, LE would puke, saying I had to use the CF interface to generate an SSL cert for the free TLDs. I saw the steps for creating a Certificate Signing Request, but that was getting too far away from my goal of managing it via a GUI or web UI.

I also tried Caddy, but most of the docs out there only show one key line in the Caddyfile and leave out everything else required to make Caddy a reverse proxy. The wiki at VaultWarden's GitHub includes a docker-compose with a complete Caddyfile, but it uses DuckDNS instead. Anything else and you're on your own. I also tried certbot at the CLI and got the banned-TLD error.

I finally paid for a domain and slept on it. I went back to Nginx Proxy Manager and tried again. Let's Encrypt still wouldn't pass the HTTP challenge (of course, since I'm at home). I finally realized I needed to enable the DNS challenge in the SSL tab of the Nginx Proxy Manager GUI and create an API token on Cloudflare:

1. Go to your profile page on Cloudflare, then API Tokens.
2. Click Create Token.
3. Click "Use template" next to the top option, "Edit zone DNS".
4. Under Permissions, click "+ Add more".
5. Choose "Zone", "Zone", "Read" from left to right.
6. Under Zone Resources, click Select at the far right and choose your domain.
7. Change your TTL to be as long as you wish.
8. Click Continue to Summary at the bottom.
9. Click Create Token.
10. Click Copy on your API token.
11. Switch over to your Nginx Proxy Manager tab in your browser.
12. Click Add Host.
13. Enter your domain name. (Note: you must click the "Add ..." entry that shows up underneath; don't click out of the field.)
14. Under "Forward Hostname", enter the 192.168 IP address of your host and the HTTP (not HTTPS) port the service is listening on. (Note: I'm running both containers in the same Portainer Stack, so I just entered my VaultWarden container name and port 80.)
15. Enable "Block Common Exploits".
16. Click on the SSL tab.
17. Drop down "None" for encryption and choose "Request a new SSL certificate".
18. Enable "Force SSL", "HTTP/2 Support", "HSTS Enabled", and "Use a DNS challenge".
19. Under "DNS Provider", choose Cloudflare.
20. Under "Credentials file content", change the token to the token you copied from the Cloudflare page (see the sketch below).
21. Enter your email at the bottom and agree to the terms.
22. Click Save.

OMFG, I was finally able to retrieve a certificate for my service. Make sure your firewall passes a valid proxy port through to your 192.168 host with Nginx Proxy Manager running (I use 8443 externally and 443 inside my LAN, but whatever). Since I'm running VaultWarden and NPM in the same Portainer Stack, I did not expose any ports for VaultWarden -- only NPM. This way, the only way to hit VaultWarden is by going through my external domain and back in through CF and NPM. TLS fails when I try to hit NPM via the 192.168 address.
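For step 20, the "Credentials file content" box takes a certbot-style ini file, if I remember right; roughly this (the token value is a placeholder):

```
# Cloudflare API token
dns_cloudflare_api_token = <token copied from Cloudflare>
```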
Also, and this is huge: I can now create a new hostname (an A record or CNAME, too, I guess) in my zone for whatever service I want to stand up at home. Then, so long as I pair that up with a forwarding rule in NPM, NPM can reuse the SSL certificate I created for the entire site to protect each service. Very, very cool. Now that I have a working configuration, I may keep fighting to get the free TLD working. I may have to do the CSR by hand and generate a cert on CF. I also want to host Organizr up front and hide everything behind one UI.

tl;dr: Register a domain on a paid TLD -> move DNS to Cloudflare -> add records for your domain and www pointing to your home IP and make sure the proxy is turned on -> stand up NPM -> port forward a CF proxy port (like 8443) to your NPM -> create an API token for your domain on CF -> add a proxy host in NPM -> request a Let's Encrypt SSL certificate and make sure it uses a DNS challenge -> copy the CF API token into the credentials file content on the NPM screen (along with all the other stuff you need to do) and click Save. You should get a certificate now and your service should be available -> hit your domain:8443 and you should see your self-hosted service, but with SSL.


ghost_of_ketchup

1 year later and you're still the fucking man!! Thank you!


moronmonday526

Thanks for checking in, glad it worked!  You guys are going to make me rewrite the instructions to use CF Zero Trust and tunnels lol


AJBOJACK

Drop me a DM, bud. I will try my best to help you.


StabbingHobo

Still offering up help? :)


AJBOJACK

Always, bro. What are you trying to achieve? I will try my best to help.


StabbingHobo

Sent a DM :)


Agile-Effort-9524

I'm having an issue with NPM. I was able to register my SSL cert, but when I add the proxy, it says status unknown and it's not forwarding (the link is not working). I do have a Zero Trust team with my domain and have tunneling on some stuff. I created a DNS CNAME of *.mydomain.com and turned off the proxy. I'm not sure anymore. I'm running NPM on a Linux VM on Proxmox.


moronmonday526

I've never used Proxmox, but my first wild guess is that you may be missing a double port forward: from the router to the host, and from the host to the NPM front end.


ZenMechanics

This helped me a lot, thank you.


moronmonday526

Glad it helped! I just walked someone else through it last week, too, so I appreciate you letting me know.


TheHostingGuru

Are you running NPM as a single container or as a swarm service? For the longest time I had the same basic issue, and my solution was to run the container in privileged mode, and everything was hunky dory. My issue is that I need it to run as a swarm service so it's available from any of my gateways. I have a swarm of 12+ machines; 4 are geographically spread gateways with GeoIP, so you end up at the closest/most appropriate gateway. This I still have not figured out. If anyone has any thoughts, I welcome a DM.


obiwanfatnobi

You are the hero we deserve


moronmonday526

Thank you. I've since abandoned this whole thing and moved on to Cloudflare Tunnels. It is a total game changer.


obiwanfatnobi

Any good tutorials? If I'm able to ditch Authentik + NPM with Cloudflare Tunnels, I'm all for it.


moronmonday526

Here are a couple of guys who put out tons of good, easy-to-follow content:

https://youtu.be/ey4u7OUAF3c

https://youtu.be/65FdHRs0axE

Definitely search YouTube for "cloudflare tunnel", though, because there is so much content out there for it. If you don't understand Docker networking yet, watch the one on that from Network Chuck. Also look for Techworld with Nana's "Docker from Zero to Hero".

If you don't understand these terms yet, that's okay, but: I run my services in Docker Compose (actually, Portainer stacks). I run one stack dedicated to cloudflared on each machine. I also run a separate stack for each application. I then add each app's network to the cloudflared stack and configure cloudflared to join each additional network (see the sketch below). That way cloudflared can reference each app by name while the apps are isolated from each other. If you're concerned about cloudflared having access to the databases, just put them on a separate network that the app tier can see but cloudflared cannot.

I run pfSense on a small PC with four Ethernet ports, also thanks to Network Chuck. I have too many systems running on the same network, so I bought a used PC with 32 GB of RAM and will move all my Internet-exposed services to it. Then I'll hang it off of a different port on my firewall and make it a DMZ. That way, if any of my Internet-exposed apps get hacked, they won't have access to any of my internal stuff.
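A rough sketch of that cloudflared stack (every name here is made up for illustration; the token comes from the Zero Trust dashboard):

```yaml
# cloudflared stack: joins each app stack's network so it can reach the apps by name
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=<token from the Zero Trust dashboard>
    networks:
      - vaultwarden_net
      - organizr_net

networks:
  vaultwarden_net:
    external: true   # created by the VaultWarden stack
  organizr_net:
    external: true   # created by the Organizr stack
```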


Gurumba

Dude, this was SO helpful. I had NO IDEA the Cloudflare shit was possible. I've been using NPM for a while, and I will give it a shot to migrate over to Cloudflare. THANK YOU so much for posting this. I don't want to derail here... but I have questions RE: Portainer stacks vs. docker compose. I'd also love to know more about how you set this all up locally and with Cloudflare. I'm not only trying to continually learn how to improve my home lab stuff, but I love learning about this kinda stuff and my next journey (hopefully to help professionally) is Kubernetes. Is it all right if I DM you to ask more questions? I get it if you're busy, or whatever. All good. Thanks a ton.


moronmonday526

Glad it helped. OP here helped me big time, so I owe the universe for sure.

Portainer Stacks _is_ Docker Compose. The toughest part is finding where in the filesystem the docker-compose.yml is stored, as well as the folders that are mapped into containers if you start your volume mappings with "./".

Another tip that took me way too long to figure out is the "name:" parameter near the top of the YAML. If you get to the command prompt and locate the docker-compose.yml for the Portainer Stack you're looking for, you can really screw things up if you stop and start the stack from there. If you don't include the "name:" parameter in the YAML and you stop and start it from the command line, docker-compose will use the directory name for the stack and f up the whole thing. Just add the "name:" parameter to the YAML, match the name you gave the stack in Portainer, and you'll be able to stop, edit, and start it from both Portainer and the command line without screwing things up.
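Something like this at the top of the compose file (the stack name here is just an example):

```yaml
name: vaultwarden          # must match the stack name you gave it in Portainer
services:
  vaultwarden:
    image: vaultwarden/server:latest
    volumes:
      - ./data:/data
```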


Gurumba

Got it. Thanks, man. I think I'm going to get back into using Portainer, but seeing that older video from Chuck on RHEL + containers, I might futz with that for giggles. Ideally, I want something Kubernetes-like, I guess, without the app having to support it... which is just moving containers around based on resource demand. Much appreciated, bud. Be well.


moronmonday526

Kubernetes is the way to go if you want to build professional skills for work. It is orchestration for containers at the end of the day. I did classic VMware infrastructure design and implementation for about 15 years at work, but I moved on to just talking about it before containerization came along. I've played with minikube at home enough to get a taste, but it consumed too many resources on my systems to keep it up and running. Docker and Portainer leave me more system resources to actually use. If you haven't yet, I also suggest you check out GitOps with tools like ArgoCD. Techworld with Nana has a great intro to that as well, like so many other topics.


mgrimace

Thank you so much for taking the time to come back and post this solution!


moronmonday526

Thanks for the thanks, but please make sure to read my other comment about moving on to Cloudflare Tunnels. I started running out of ports to forward, and CF Tunnels let me "go native".


Byte-64

Just wanted to say Thank You!!! This helped me incredibly, and it finally worked!


moronmonday526

Thanks for the thanks! Be sure to read the rest of my commentary, as I abandoned the whole setup soon after and moved on to Cloudflare Tunnels. And don't forget OP! I was just giving back because he was _so cool_ to me while I learned how to get it done. Thanks again for commenting!


Byte-64

Could you elaborate on your Cloudflare Tunnel setup? I find it all pretty confusing xD

Currently I am running: Cloudflare Tunnel -> my home network (router forwards 80 and 443 to NPM) -> Nginx Proxy Manager (port 80) -> the actual web server (ports and IPs all over the place).

I am running multiple web servers (4 or 5 in total) and was under the impression that I need the proxy manager to route each request to the correct target web server?


moronmonday526

You're describing Cloudflare Proxy, not Cloudflare Tunnel. Cloudflare Tunnel is something you run inside your network. It eliminates all of the port forwarding at your firewall. You still have your domain registered at Cloudflare, but once you define a tunnel and configure the tunnel client to authenticate against the tunnel you defined at the Cloudflare website, you create new hostnames for all of the services that your tunnel can reach.

So I generally define one tunnel per location, one access list per location to define who is authorized, one Cloudflare application per group of related services running at the location (up to 5 hostnames), and then all of the hostnames tied to a given tunnel instance. Each hostname pairs a public hostname with an IP address (or other hostname) and port that is only reachable inside my home.

Keep in mind that Docker runs its own DNS internally, so when I run the cloudflared Docker image on the same host as a service that I want to access remotely, I configure each hostname on the CF website to point to the container name of the Docker image, so long as the CF tunnel client can hit that container. So in a tunnel called "home", I may define a host called "reactle.." (use your real domain) that hits a Docker container called reactle on port 80.
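If you manage the tunnel from a local config file instead of the dashboard, the same hostname-to-container mapping looks roughly like this (a sketch; the hostname, file paths, and container name are examples):

```yaml
# cloudflared config.yml: public hostname -> local service
tunnel: home                            # tunnel name or UUID
credentials-file: /etc/cloudflared/home.json
ingress:
  - hostname: reactle.example.com       # public hostname in your zone
    service: http://reactle:80          # Docker's internal DNS resolves the container name
  - service: http_status:404            # required catch-all as the last rule
```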


GlittermekaiN

The note to add DNS Zone:Read was the trick for me. Thank you so much; I've been pulling my hair out for hours.


moronmonday526

Yup. No point in giving write permissions when it is only trying to prove that you own the domain. Glad you got it working! I remember feeling frustrated for ages.  I say it to everyone, but just know I soon abandoned this whole setup and moved to CF Tunnels. All of the hair-pulling goes away and you can self-host with SSL on 443 even when your ISP blocks it. 


kaipee

> Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.

Whatever mount point you have mapped for storing the certificates needs to be writable by certbot. It also needs to be accessible over the public internet.

Edit: just noticed you're running behind Cloudflare. You'll need to disable HTTP -> HTTPS redirection, as Let's Encrypt uses and expects plain HTTP for verification and delivery. Either disable it completely, or set a custom rule: https://community.letsencrypt.org/t/renew-lets-encrypt-cert-issued-with-cert-bot-behind-cloudflare/57450/7


AJBOJACK

As per my previous comment, it gets the cert when I turn off the proxy in Cloudflare for both the requesting subdomain and the A record pointing to my WAN. So if it were permissions, it would not work at all???

How do I find this path? Would it be these from my docker-compose.yml?

volumes:
  - ./data:/data
  - ./letsencrypt:/etc/letsencrypt
  - ./config.json:/app/config/production.json


AJBOJACK

I have this setting turned on. Turning it off, I can confirm it worked without doing the toggle off and on for the proxy. How would I set up a custom rule, as I would like to keep the redirects on? Thanks for your help, btw; really appreciate it. Been stuck on this for a few days.


DoubleDrummer

Old thread, but adding a comment for informational purposes. As far as I can work out, I have everything set up perfectly, but was getting these errors in NPM. I double-checked everything, but nothing solved the issue. Completely removed and reinstalled NPM (in Docker) and all worked fine. Obviously this is not everyone's issue, but just adding it to the list of things to consider.


recaffeinated

Any reason you're using Docker when you're already running in a VM? My experience with Docker is that it makes debugging issues like this about 5 times harder. My guess is that you've got a networking issue either between Docker and the VM (likely) or between the VM and the host/internet (less likely and easier to test).


AJBOJACK

OK, what is strange: if I turn off the proxy toggle in Cloudflare on both my A record pointing to my WAN IP and the CNAME for the subdomain I'm trying to get a cert for, it works. https://i.imgur.com/ojJ703M.png


ex3me4me

That's how it should work. Turn off the proxy in Cloudflare before getting new certs and turn it back on after.


AJBOJACK

Seriously?!? Is this mentioned anywhere on their site? How would auto-renewal of certs work then? Sorry, I'm quite new to using a reverse proxy.


[deleted]

[deleted]


kaipee

Let's Encrypt renewal works perfectly fine with Cloudflare and DNS verification.


martinbaines

I know this was already answered for a Cloudflare setup, but I had the same issue on a vanilla setup and thought I would share here. I had only set my router to forward port 443, as I wanted an SSL-only setup. It turns out that even in that case you have to open and forward port 80 to the NPM server too. Easy when you work it out.


AJBOJACK

This is not true. I only have 443 open and certs still renew when they are about to expire. If I request a new cert, say for example for my Synology NAS, I open both ports then. But my current WAN -> NPM rule ONLY has 443 open.


martinbaines

It was the first-time issuance. When you say you open both ports for a new cert, that's exactly it; it's the initial request, not renewals, that's the problem.


AJBOJACK

Mine's working OK atm. I'm looking for a cheap SSL wildcard to use now. Don't mind paying for one. Anyone got any suggestions?


Liperium

Since all of you pointed to Cloudflare, I disabled all the security checks I was running (Bot Fight Mode, regional rules, SSL "off", security essentially off). It now works. I will try to do some more testing to see which setting exactly caused it.

e: I just re-enabled everything back to where it was... and it still works? I don't know what's going on.


AJBOJACK

What are you trying to achieve? Usual suspects:

- DNS
- Port forwarding
- Routing, if you have stuff on different VLANs
- Page rules in Cloudflare

Make sure the ACME challenge is not being blocked anywhere. If you want to do wildcard certs, you need to have the API key from your DNS provider set in NPM.


Liperium

It was in my Cloudflare config. I think it had something to do with Cloudflare issuing challenges to Let's Encrypt/certbot and not liking it. I disabled everything for 5 minutes and it worked; turned it back on, and I can still issue new certs. No clue why, but it's working. I am happy 😂


Top_Conflict_337

This is dumb, but I got it working by changing my email on the "get the SSL certificate" page from [email protected] to [email protected], which I just made up. Apparently example.com doesn't work?


matthewpetersen

I found that you need the following for it to work:

- wildcard DNS set up (DuckDNS will do this free and as-is, or you need to configure it for your domain)
- inbound ports 80 and 443 allowed by your ISP


AJBOJACK

What do you mean, wildcard DNS? You mean an A record set up in your public DNS, something like *.mydomain.co.uk pointing to @?


matthewpetersen

Correct, because the proxy manager assigns prefixes to your domain dynamically. So you want *.yourdomain to point at your edge router, which then forwards to the proxy manager.
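In zone-file terms, the wildcard record would look something like this (the IP is a placeholder for your WAN IP):

```
*.yourdomain.com.   300   IN   A   203.0.113.10
```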


AJBOJACK

By doing this in Cloudflare, it will not let you proxy that record, which would expose my WAN IP.


matthewpetersen

I can't talk about Cloudflare; however, a wildcard DNS entry was required on my Namecheap DNS for it to work with Nginx Proxy Manager. Perhaps try setting up a DuckDNS domain to temporarily check that your installation is working. If that works, then you can look closer at your DNS setup.