The documentation specifies that the hash algorithm defaults to bcrypt.
However, the implementation returns an error in Provision if no hash is
provided.
Fix this inconsistency by *actually* defaulting to bcrypt.
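A minimal sketch of the shape of the fix (the struct and field names here are illustrative, and the surrounding module-loading code is elided):
```
// If no hash module was configured, fall back to bcrypt so the behavior
// matches the documented default, instead of returning an error.
func (hba *HTTPBasicAuth) Provision(ctx caddy.Context) error {
	if hba.HashRaw == nil {
		// previously: return an error; now: default to bcrypt
		hba.HashRaw = json.RawMessage(`{"algorithm": "bcrypt"}`)
	}
	// ... load and use the configured hash module as before ...
	return nil
}
```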
We don't load the lego providers directly, because their types aren't
designed for JSON configuration and they are not implemented as Caddy
modules: there are setup steps a Provision call would need to perform,
but the providers have no Provision methods, only their own constructor
functions, which we have to wrap.
Instead of loading the challenge providers directly, the modules are
simple wrappers over the challenge providers, to facilitate the JSON
config structure and to provide a consistent experience. This also lets
us swap out the underlying challenge providers transparently if needed;
it acts as a layer of abstraction.
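A rough sketch of that wrapper pattern, with a hypothetical DNS provider package (`someprovider`) and illustrative field names:
```
// Wrapper adapts a lego challenge provider to Caddy's module conventions:
// it only carries JSON-friendly config fields and defers the real setup
// to the provider's own constructor.
type Wrapper struct {
	APIToken string `json:"api_token,omitempty"`

	provider challenge.Provider // the wrapped lego provider
}

// Provision performs the setup the lego type cannot do itself, by calling
// the provider's constructor with values taken from the JSON config.
func (w *Wrapper) Provision(ctx caddy.Context) error {
	p, err := someprovider.NewDNSProvider(w.APIToken) // hypothetical constructor
	if err != nil {
		return err
	}
	w.provider = p
	return nil
}
```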
This is temporary as we prepare for a stable v2 release. We don't want
to make promises we don't know we can keep, and the Starlark integration
deserves more focused attention than current resources and funding
permit. When the project is financially stable, I will be able
to revisit this properly and add flexible, robust Starlark scripting
support to Caddy 2.
This ensures that if there are multiple certs that match a particular
ServerName or other parameter, then specifically the one the user
provided in the Caddyfile will be used.
This is necessary to avoid a race for sockets. Both the HTTP servers and
CertMagic solvers will try to bind the HTTP/HTTPS ports, but we need to
make sure that our HTTP servers bind first. This is kind of a new thing
now that management is async in Caddy 2.
Also update to CertMagic 0.9.2, which fixes some async use cases at
scale.
See https://caddy.community/t/caddy-server-that-returns-only-ip-address-as-text/6928/6?u=matt
In most cases, we will want to apply header operations immediately,
rather than waiting until the response is written. The exceptions are
generally going to be if we are deleting a header field or if a field is
to be overwritten. We now automatically defer header ops if deleting a
header field, and allow the user to manually enable deferred mode with
the defer subdirective.
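For example, in the Caddyfile (using the `header` directive's block form), a deletion is deferred automatically, and `defer` forces deferral for the other operations in the block:
```
header {
	-Server               # deleting a field is deferred automatically
	X-Robots-Tag noindex  # overwrite a field the backend may also set
	defer                 # apply these ops when the response is written
}
```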
Paths always begin with a slash, and omitting the leading slash could be
convenient to avoid confusion with a path matcher in the Caddyfile. I do
not think there would be any harm in implicitly adding the leading slash.
* v2: add documentation for circuit breaker config and "random selection" load balancing policy
* v2: rename circuit breaker config inline key from `type` to `breaker` to avoid json key clash between the `circuit_breaker` type and the `type` field of the generic circuit breaker Config struct used by circuit breaking implementations
* v2: restore the circuit breaker inline key to `type` and rename the generic circuit breaker Config field from `Type` to `Factor`
The fix that was initially put forth in #2971 was good, but only for
up to one layer of nesting. The real problem was that we forgot to
increment nesting when already inside a block if we saw another open
curly brace that opens another block (dispenser.go L157-158).
The new 'handle' directive allows HTTP Caddyfiles to be designed more
like nginx location blocks if the user prefers. Inside a handle block,
directives are still ordered just like they are outside of them, but
handler blocks at a given level of nesting are mutually exclusive.
This work benefitted from some refactoring and cleanup.
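A small illustration (site address and upstream are placeholders):
```
example.com {
	handle /api/* {
		reverse_proxy localhost:9000
	}
	handle {
		# mutually exclusive with the block above; acts as a fallback
		file_server
	}
}
```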
Before, modifying the path might have affected how a new query string
was built if the query string relied on the path. Now, we build each
component in isolation and only change the URI on the request later.
Also, prevent trailing & in query string.
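A simplified sketch of the approach (variable names are illustrative; `repl` is the request's replacer, and the two templates are the configured rewrite values):
```
// Compute both pieces from the request as it arrived...
newPath := repl.ReplaceAll(pathTemplate, "")
newQuery := repl.ReplaceAll(queryTemplate, "")
newQuery = strings.TrimSuffix(newQuery, "&") // prevent trailing '&'

// ...and only then mutate the request URI, so building the query string
// cannot observe the already-rewritten path (or vice versa).
r.URL.Path = newPath
r.URL.RawQuery = newQuery
```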
This splits automatic HTTPS into two phases. The first provisions the
route matchers and uses them to build the domain set and configure
auto HTTP->HTTPS redirects. This happens before the rest of the
provisioning does.
The second phase takes place at the beginning of the app start. It
attaches pointers to the tls app to each server, and begins certificate
management for the domains that were found in the first phase.
Our new parser also preserves original parameter order, rather than
re-encoding using the std lib (which sorts).
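For reference, this is the std lib behavior being avoided:
```
q, _ := url.ParseQuery("zeta=1&alpha=2")
fmt.Println(q.Encode()) // prints "alpha=2&zeta=1"; keys get re-sorted
```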
The renamed parameters are a breaking change but they're new enough
that I don't think anyone is using them.
* http: path matcher: exact match by default; substring matches (#2959)
This is a breaking change.
* caddyfile: Change "matcher" directive to "@matcher" syntax (#2959)
* cmd: Assume caddyfile adapter for config files named Caddyfile
* Sub-sort handlers by path matcher length (#2959)
Caddyfile-generated subroutes have handlers, which are sorted first by
directive order (this is unchanged), but within directives we now sort
by specificity of path matcher in descending order (longest path first,
assuming that longest path is most specific).
This only applies if there is only one matcher set, and the path
matcher in that set has only one path in it. Path matchers with two or
more paths are not sorted like this; and routes with more than one
matcher set are not sorted like this either, since specificity is
difficult or impossible to infer correctly.
This is a special case, but definitely a very common one, as a lot of
routing decisions are based on paths.
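A sketch of the sort criterion (the helper and field names are hypothetical):
```
// Within one directive's routes, order by path length, longest first.
// onlyPath returns the single path of a route's lone path matcher and
// reports whether the route qualifies for this special-case sort.
sort.SliceStable(routes, func(i, j int) bool {
	pi, ok1 := onlyPath(routes[i]) // hypothetical helper
	pj, ok2 := onlyPath(routes[j])
	if !ok1 || !ok2 {
		return false // leave non-qualifying routes in directive order
	}
	return len(pi) > len(pj)
})
```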
* caddyfile: New 'route' directive for appearance-order handling (#2959)
* caddyfile: Make rewrite directives mutually exclusive (#2959)
This applies only to rewrites in the top-level subroute created by the
HTTP caddyfile.
Previously, all matchers in a route would be evaluated before any
handlers were executed, and a composite route of the matching routes
would be created. This made rewrites especially tricky, since the only
way to defer later matchers' evaluation was to wrap them in a subroute,
or to invoke a "rehandle" which often caused bugs.
Instead, this new sequential design evaluates each route's matchers then
its handlers in lock-step; matcher-handlers-matcher-handlers...
If the first matching route consists of a rewrite, then the second route
will be evaluated against the rewritten request, rather than the original
one, and so on.
This should do away with any need for rehandling.
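Conceptually (field names and control flow are simplified here), the router now works like this:
```
for _, route := range routes {
	// matchers see the request as it is *right now*, including any
	// rewrite performed by an earlier route in this same pass
	if !route.MatcherSets.AnyMatch(r) {
		continue
	}
	// run this route's handlers before the next route's matchers
	// are even consulted
	if err := runHandlers(route, w, r); err != nil { // simplified
		return err
	}
}
```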
I've also taken this opportunity to avoid adding new values to the
request context in the handler chain, as this creates a copy of the
Request struct, which can lead to bugs as it has in the past
(see PR #1542, PR #1481, and maybe issue #2463). We now add all the
expected context values in the top-level handler at the server, then
any new values can be added to the variable table via the VarsCtxKey
context key, or just the GetVar/SetVar functions. In particular, we are
using this facility to convey dial information in the reverse proxy.
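For example, a handler can stash and retrieve values through the vars table instead of deriving a new context (the key name here is just an example):
```
// store a value without copying the *http.Request
caddyhttp.SetVar(r.Context(), "upstream_dial_address", "10.0.0.5:443")

// ...later, in another handler or middleware:
if v := caddyhttp.GetVar(r.Context(), "upstream_dial_address"); v != nil {
	addr := v.(string)
	_ = addr // use the dial address, e.g. for logging
}
```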
Had to be careful in one place as the middleware compilation logic has
changed, and moved a bit. We no longer compile a middleware chain per-
request; instead, we can compile it at provision-time, and defer only the
evaluation of matchers to request-time, which should slightly improve
performance. Doing this, however, relies on multiple function closures,
and we also changed the use of HandlerFunc (function pointer) to Handler
(interface)... this created a situation where, if we aren't careful, one
request routed a certain way could permanently change the "next" handler
for all/most other requests! We avoid this by making
a copy of the interface value (which is a lightweight pointer copy) and
using exclusively that within our wrapped handlers. This way, the
original stack frame is preserved in a "read-only" fashion. The comments
in the code describe this phenomenon.
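A stripped-down illustration of the hazard and the copy that avoids it:
```
// BUGGY: the closure captures the variable 'next' itself; if the
// compilation code later reassigns 'next', every handler built earlier
// silently starts calling the new value.
wrapped := caddyhttp.HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
	return next.ServeHTTP(w, r)
})

// SAFER: copy the interface value (a cheap, pointer-sized copy) and close
// over the copy, so the captured value is effectively read-only.
nextCopy := next
wrapped = caddyhttp.HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
	return nextCopy.ServeHTTP(w, r)
})
_ = wrapped
```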
This may very well be a breaking change for some configurations, however
I do not expect it to impact many people. I will make it clear in the
release notes that this change has occurred.
Allows specifying CA certs by filename in
`reverse_proxy.transport`.
Example:
```
reverse_proxy /api api:443 {
	transport http {
		tls
		tls_trusted_ca_certs certs/rootCA.pem
	}
}
```
It seems silly to have to add a single, empty TLS connection policy to
a server to enable TLS when it's only listening on the HTTPS port. We
now do this for the user as part of automatic HTTPS (thus, it can be
disabled / overridden).
See https://caddy.community/t/v2-catch-all-server-with-automatic-tls/6692/2?u=matt
This commit goes a long way toward making automated documentation of
Caddy config and Caddy modules possible. It's a broad, sweeping change,
but mostly internal. It allows us to automatically generate docs for all
Caddy modules (including future third-party ones) and make them viewable
on a web page; the same text also doubles as godoc comments.
As such, this commit makes significant progress in migrating the docs
from our temporary wiki page toward our new website which is still under
construction.
With this change, all host modules will use ctx.LoadModule() and pass in
both the struct pointer and the field name as a string. This allows the
reflect package to read the struct tag from that field so that it can
get the necessary information like the module namespace and the inline
key.
This has the nice side-effect of unifying the code and documentation. It
also simplifies module loading, and handles several variations on field
types for raw module fields (i.e. variations on json.RawMessage, such as
arrays and maps).
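The pattern looks roughly like this (the `Gizmo` and `Gadgeter` names are made up for illustration; the `caddy` struct tag and the `ctx.LoadModule` call are the actual mechanism):
```
type Gizmo struct {
	// The struct tag tells LoadModule which namespace to load from and
	// which inline key identifies the guest module in the JSON.
	GadgetRaw json.RawMessage `json:"gadget,omitempty" caddy:"namespace=gizmo.gadgets inline_key=gadget"`

	gadget Gadgeter // the decoded module, populated in Provision
}

func (g *Gizmo) Provision(ctx caddy.Context) error {
	val, err := ctx.LoadModule(g, "GadgetRaw") // field name as a string
	if err != nil {
		return fmt.Errorf("loading gadget module: %v", err)
	}
	g.gadget = val.(Gadgeter)
	return nil
}
```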
I also renamed ModuleInfo.Name -> ModuleInfo.ID, to make it clear that
the ID is the "full name" which includes both the module namespace and
the name. This clarity is helpful when describing module hierarchy.
As of this change, Caddy modules are no longer an experimental design.
I think the architecture is good enough to go forward.
Adds tests for both the path matcher and host matcher for case
insensitivity.
If case sensitivity is required for the path, a regexp matcher can
be used instead.
This is the v2 equivalent fix of PR #2882.
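For example, in the Caddyfile (the matcher name and pattern are arbitrary):
```
# regexp matching remains case-sensitive, unlike the path matcher
@exactCase {
	path_regexp ^/API/v1/
}
respond @exactCase 404
```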
* fix OOM issue caught by fuzzing
* use ParsedAddress as the struct name for the result of ParseNetworkAddress
* simplify code using the ParsedAddress type
* minor cleanups
* Add support for placeholders in Config
Fixes #2870
* Replace placeholders only in logging config.
Placeholders in the log level and, in the case of file output, the filename are replaced.
* Add Provision to filewriter module for replacing placeholders
Errors in the 4xx range are client errors, and they don't need to be
entered into the server's error logs. 4xx errors are still recorded in
the access logs at the error level.
This makes it easier to make "standard" caddy builds, since you'll only
need to add a single import to get all of Caddy's standard modules.
There is a package for all of Caddy's standard modules (modules/standard)
and a package for the HTTP app's standard modules only
(modules/caddyhttp/standard).
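For example, a custom build's main package can pull everything in like this:
```
package main

import (
	caddycmd "github.com/caddyserver/caddy/v2/cmd"

	// plug in all of Caddy's standard modules with one import
	_ "github.com/caddyserver/caddy/v2/modules/standard"
)

func main() {
	caddycmd.Main()
}
```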
We still need to decide which of these, if not all of them, should be
kept in the standard build. Those which aren't should be moved out of
this repo. See #2780.