I appreciate the reply. I took some time to look at your example so I can give some feedback on where I end up when I think about building / maintaining my own image.
My immediate reaction is that the example is nice as a one-off build, but it's much more complex if I need to set up something I can maintain long-term. I might be overthinking it, but my thought process for something I'd have to maintain is below. The questions are mostly rhetorical.
First, what versions am I getting? Does using `2.5.1-builder` result in a custom-built binary that's version `2.5.1`? The command usage [1] for the `xcaddy` command says it falls back to the `CADDY_VERSION` environment variable if the version isn't set explicitly. Since it's not set explicitly here, I go looking for that variable in the Dockerfile [2].
That's some templating language I'm not familiar with, and I can't track down where the variable gets set, at least not quickly. I'd probably have to spend an hour learning how those templates work to figure it out. My quicker, educated guess is that it matches the builder version. The docs say the version can be set to any git ref, so I can explicitly set it to v2.5.1 [3] on the command line to be certain.
Now, what version of `caddy-dns/cloudflare` am I getting? The xcaddy custom builds section of the docs [4] says the version can optionally be specified, but it isn't specified in the above example. There aren't any tags in the repo [5], so it's probably building off `master`. The doc says it functions similarly to `go get`, but it doesn't explain what the differences are, and the default behavior isn't explained either.
The docs for `go get` [6] say it can use a revision, so maybe a specific commit can be used for that, but I'd need to test it since I'm not super familiar with Golang.
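For what it's worth, here's roughly where I'd land if I pinned everything explicitly. This is just a sketch based on the docs above, not something I've battle-tested, and the plugin commit hash is a placeholder rather than a real `caddy-dns/cloudflare` revision:

```dockerfile
FROM caddy:2.5.1-builder AS builder

# xcaddy falls back to CADDY_VERSION [1]; setting it explicitly removes
# the guesswork about what the builder image defaults to.
ENV CADDY_VERSION=v2.5.1

# The @<ref> suffix follows go get semantics [6]. The hash below is a
# placeholder; substitute a real commit from the plugin repo.
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare@0123456789abcdef

FROM caddy:2.5.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```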
What other risks come along with building and maintaining my own custom image? I could end up with a subtly broken build that only occurs in my environment. Portability doesn't guarantee compatibility [7], and building custom images increases the risk of compatibility issues beyond what I'd get with official images (building and running vs. just running). That blog post is a really cool read on its own, BTW.
I need to consider the potential for breakage even if it's minuscule, because my Docker infrastructure is self-hosted and will be sitting behind my custom-built Caddy image. If my custom image breaks, I need a guaranteed way to get back to a previous, known-good version. That's as simple as publishing the images externally, but it adds an extra step: I'll need an account at a registry and need to integrate pushes to that registry into my build.
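The simplest version of that I can think of (a sketch, with placeholder registry and image names) is pushing every build under an immutable, date-stamped tag so an older, known-good image can always be pulled back:

```sh
# Placeholder registry host and image name; the date suffix makes each
# build individually addressable instead of overwriting :latest.
docker build -t registry.example.com/homelab/caddy-cloudflare:2.5.1-20220601 .
docker push registry.example.com/homelab/caddy-cloudflare:2.5.1-20220601
```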
If I build a custom image, do I let the people I occasionally help with the odd tech thing use it, or is all the effort just for me? I don't want to become the maintainer of a Docker image others rely on, so I can't even reuse any related config when I help others in the future, since they won't have access to the needed image.
To be fair, I also see things I don't like in the NGINX Proxy Manager Dockerfile [8]. The two that immediately jump out at me are things I consider common mistakes. Both need unlucky timing to bite, but they can technically cause failures IMO. The first is running `apt-get update` on its own: it can exit 0 even when fetching indexes fails, which leaves `apt-get install` working against stale package lists. The second is running `apt-get update` in multiple stages of a multi-stage build. If I were doing it, I'd run `apt-get update` once in a base stage and skip it in the builder and runtime stages, to guarantee the package versions stay the same between the build container and the runtime container.
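To illustrate the pattern I mean (a rough sketch with a placeholder base image and packages, not what that Dockerfile actually installs):

```dockerfile
# Run apt-get update once in a shared base stage so the builder and
# runtime stages install against the same package index snapshot.
FROM debian:bullseye AS base
RUN apt-get update

FROM base AS builder
RUN apt-get install -y --no-install-recommends build-essential

FROM base AS runtime
RUN apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```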
It took me about an hour to work through all that and write this comment, so it's not just a matter of building a Docker image and plugging in the config. There's a lot of nuance that goes into maintaining a Docker image (I'm sure you know that already), and not having an image with the DNS plugin(s) baked in is a showstopper for anyone like me who can't justify maintaining their own.
Also, a four-line Dockerfile looks nice in terms of simplicity, but explicitly declaring (or even just adding comments describing) some of the things I pointed out above could save people a lot of time. Even comments linking to the relevant portions of the docs would be super useful.
My reason for wanting the Cloudflare DNS plugin is that I have some things I want to run 100% locally without ever exposing them to the internet. The desire for wildcard certificates is to keep individual hostnames from being discoverable via CT (Certificate Transparency) logs.
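For context, the config I'd want to end up with looks roughly like this (adapted from the caddy-dns/cloudflare README; the domain and the response are placeholders):

```
*.lab.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	respond "internal only"
}
```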
I hope that's useful feedback. I realize someone bemoaning the difficulty of running your stuff at home lab / small business scale isn't exactly the target audience in terms of picking up customers that pay the bills. Thanks again for the reply / example.
1. https://github.com/caddyserver/xcaddy#command-usage
2. https://github.com/caddyserver/caddy-docker/blob/master/Dock...
3. https://github.com/caddyserver/caddy/tree/v2.5.1
4. https://github.com/caddyserver/xcaddy#custom-builds
5. https://github.com/caddy-dns/cloudflare/tags
6. https://go.dev/ref/mod#go-get
7. https://www.redhat.com/en/blog/containers-understanding-diff...
8. https://github.com/NginxProxyManager/docker-nginx-full/blob/...
-
Just a quick query from someone who has beginner-level experience with NGINX and Caddy. Doesn't this PR [1] for Traefik imply that trailing dots for FQDNs are fixed?
-
You can also do this with Caddy, using this plugin: https://github.com/lucaslorentz/caddy-docker-proxy
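For anyone curious what that looks like: the plugin generates Caddy config from container labels. A minimal docker-compose sketch based on its README (the service name and domain are placeholders) is:

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```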