We don't load the provider directly, because the lego provider types
aren't designed for JSON configuration and they are not implemented
as Caddy modules: there are setup steps which a Provision call would
need to do, but the types have no Provision methods, only their own
constructor functions, which we have to wrap.
Instead of loading the challenge providers directly, the modules are
thin wrappers around them, to facilitate the JSON config structure
and to provide a consistent experience. This also lets us swap out
the underlying challenge providers transparently if needed; it acts
as a layer of abstraction.
This is temporary as we prepare for a stable v2 release. We don't want
to make promises we don't know we can keep, and the Starlark
integration deserves much more focused attention than current
resources and funding permit. When the project is financially stable,
I will be able
to revisit this properly and add flexible, robust Starlark scripting
support to Caddy 2.
If the user provides their own certs or defines any hostname-specific
TLS connection policy, then no TLS connections would be served for
any other hostnames, even though you'd expect TLS to be enabled for
them, too. So now we append a catch-all connection policy if none
exists, which allows all ClientHellos to be matched and served.
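For illustration, a server's TLS connection policies in the JSON
config might now end up shaped roughly like this (abbreviated sketch;
the final empty policy is the appended catch-all, which matches any
ClientHello):

    "tls_connection_policies": [
        { "match": { "sni": ["example.com"] } },
        {}
    ]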
We also fix the consolidation of automation policies, which previously
gobbled up automation policies without hosts in favor of automation
policies with hosts. Instead of a host-specific policy eating up an
identical catch-all policy, the catch-all policy eats up the identical
host-specific policy, ensuring that the policy is applied to all hosts
which need it.
See also:
https://caddy.community/t/v2-automatic-https-certificate-errors/6847/9?u=matt
This ensures that if there are multiple certs that match a particular
ServerName or other parameter, then specifically the one the user
provided in the Caddyfile will be used.
This is necessary to avoid a race for sockets. Both the HTTP servers and
CertMagic solvers will try to bind the HTTP/HTTPS ports, but we need to
make sure that our HTTP servers bind first. This is kind of a new thing
now that management is async in Caddy 2.
Also update to CertMagic 0.9.2, which fixes some async use cases at
scale.
See https://caddy.community/t/caddy-server-that-returns-only-ip-address-as-text/6928/6?u=matt
In most cases, we will want to apply header operations immediately,
rather than waiting until the response is written. The exceptions are
generally going to be if we are deleting a header field or if a field is
to be overwritten. We now automatically defer header ops if deleting a
header field, and allow the user to manually enable deferred mode with
the defer subdirective.
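For example, in the Caddyfile (illustrative):

    header {
        -Server            # deleting a field defers the ops automatically
        X-Custom "value"
        defer              # or opt into deferred mode explicitly
    }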
Paths always begin with a slash, and omitting the leading slash can be
convenient to avoid confusion with a path matcher in the Caddyfile. I
do not think there is any harm in implicitly adding the leading slash.
Before, listener ports could be wrong because ParseAddress doesn't
know about the user-configured HTTP/HTTPS ports; it hard-codes ports
80 and 443, which are wrong if the user changed them to something
else. Now we defer port and scheme validation/inference to a later
phase of building the output JSON.
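For example, with a Caddyfile like this (sketch), inference now picks
up the custom ports from the global options instead of assuming 80
and 443:

    {
        http_port 8080
        https_port 8443
    }

    example.com {
        respond "hello"
    }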
* v2: add documentation for circuit breaker config and the "random selection" load balancing policy
* v2: rename the circuit breaker config inline key from `type` to `breaker`, to avoid a JSON key clash between the `circuit_breaker` type and the `type` field of the generic circuit breaker Config struct used by circuit breaking implementations
* v2: restore the circuit breaker inline key to `type`, and rename the circuit breaker config field from `Type` to `Factor`
Using rewrite is like saying, "I accept this request, but I just need
to act on it as if it came in differently."
Whereas redir implies more of, "I reject this request, send it to me
differently, then I will process it."
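A quick illustration of the difference (paths hypothetical):

    rewrite /old.html /new.html    # internal; client still sees /old.html
    redir   /old.html /new.html    # external; client re-requests /new.html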
Makes sense for it to come before rewrites. This can always be changed
using the 'order' global option if needed.
The fix that was initially put forth in #2971 was good, but only for
up to one layer of nesting. The real problem was that we forgot to
increment nesting when already inside a block if we saw another open
curly brace that opens another block (dispenser.go L157-158).
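For example, a Caddyfile with blocks nested more than one level deep,
such as this (illustrative), would throw off the old nesting count:

    example.com {
        route {
            header {
                -Server
            }
        }
    }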
The new 'handle' directive allows HTTP Caddyfiles to be designed more
like nginx location blocks if the user prefers. Inside a handle block,
directives are still ordered just like they are outside of them, but
handler blocks at a given level of nesting are mutually exclusive.
This work benefitted from some refactoring and cleanup.
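For example (illustrative), directives inside each block keep their
usual order, but only the first matching handle block runs:

    example.com {
        handle /api/* {
            respond "API" 200
        }
        handle {
            respond "everything else" 200
        }
    }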
This allows individual directives to be ordered relative to others,
where order matters (for example HTTP handlers). Will primarily be
useful when developing new directives, so you don't have to modify the
Caddy source code. Can also be useful if you prefer that redir comes
before rewrite, for example. Note that these are global options. The
route directive can be used to give a specific order to a specific group
of HTTP handler directives.
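For example, to put redir before rewrite, as mentioned above (sketch):

    {
        order redir before rewrite
    }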
In the v1 Caddyfile, only the first matching site definition would be
used, so setting these `Terminal: true` ensures that only the first
matching one is used in v2, too.
We also have to sort by key specificity... Caddy 1 had a special data
structure for selecting the most specific site definition, but we don't
have that structure in v2, so we need to sort by length (of host and
path, separately). For blocks where more than one key is present, we
choose the longest host and path (independently, need not be from same
key) by which to sort.
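For instance, given these two blocks (hypothetical), the key with the
longer path sorts first, and since site blocks are terminal, it wins
for the requests it matches:

    example.com/specific/* {
        respond "specific"
    }

    example.com {
        respond "general"
    }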
Before, modifying the path might have affected how a new query string
was built if the query string relied on the path. Now, we build each
component in isolation and only change the URI on the request later.
Also, prevent trailing & in query string.
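For instance, a rewrite like this (hypothetical) now sees the
original path in the placeholder, because each URI component is
computed from the original request before any changes are applied:

    rewrite * /index.php?p={http.request.uri.path}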
This splits automatic HTTPS into two phases. The first provisions the
route matchers and uses them to build the domain set and configure
auto HTTP->HTTPS redirects. This happens before the rest of the
provisioning does.
The second phase takes place at the beginning of the app start. It
attaches pointers to the tls app to each server, and begins certificate
management for the domains that were found in the first phase.
Our new parser also preserves original parameter order, rather than
re-encoding using the std lib (which sorts).
The renamed parameters are a breaking change but they're new enough
that I don't think anyone is using them.
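For example, the order-preserving behavior (illustrative):

    rewrite * /search?z=1&a=2    # query stays z=1&a=2, not a=2&z=1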
When we append a token to the new dispenser, we need to consume it in
the parent, too; otherwise it gets scanned twice. In this case, the
double scan messed up the nesting count, which got decremented once
too many times.
* fix replacing variables on imported files
* refactor replaceEnvVars to ensure it is always called
* use byte slices for easier use
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
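As a sketch of the behavior being fixed, environment variable
placeholders are now replaced in imported files too (file names
hypothetical):

    # sites.caddyfile (imported)
    {$DOMAIN} {
        respond "hello"
    }

    # Caddyfile
    import sites.caddyfile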