* httpcaddyfile: Begin implementing the log directive and debug mode
For now, debug mode just sets the log level for all logs to DEBUG
(unless a level is specified explicitly).
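A minimal sketch of that defaulting behavior, using stand-in types rather
than the actual httpcaddyfile ones:

    package main

    import "fmt"

    // logConfig stands in for a configured log; not the actual type.
    type logConfig struct {
        Name  string
        Level string
    }

    // applyDebugMode fills in DEBUG for any log whose level was not set
    // explicitly, which is the behavior described above.
    func applyDebugMode(logs []*logConfig, debug bool) {
        if !debug {
            return
        }
        for _, l := range logs {
            if l.Level == "" {
                l.Level = "DEBUG"
            }
        }
    }

    func main() {
        logs := []*logConfig{{Name: "default"}, {Name: "access", Level: "INFO"}}
        applyDebugMode(logs, true)
        fmt.Println(logs[0].Level, logs[1].Level) // DEBUG INFO
    }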
* httpcaddyfile: Finish 'log' directive
Also rename StringEncoder -> SingleFieldEncoder
* Fix minor bug in replacer (when vals are empty)
* caddytls: Add CipherSuiteName and ProtocolName functions
The cipher_suites.go file is derived from a commit to the Go master
branch that's slated for Go 1.14. Once Go 1.14 is released, this file
can be removed.
* caddyhttp: Use commonLogEmptyValue in common_log replacer
* caddyhttp: Add TLS placeholders
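A hedged sketch of where such placeholder values can come from; the map
stands in for the replacer and the keys shown are illustrative:

    package main

    import (
        "net/http"
        "strconv"
    )

    // addTLSPlaceholders copies fields from the request's TLS state into
    // a plain map standing in for the replacer.
    func addTLSPlaceholders(repl map[string]string, r *http.Request) {
        if r.TLS == nil {
            return // not a TLS request
        }
        repl["http.request.tls.server_name"] = r.TLS.ServerName
        repl["http.request.tls.proto"] = r.TLS.NegotiatedProtocol
        repl["http.request.tls.resumed"] = strconv.FormatBool(r.TLS.DidResume)
    }

    func main() {
        repl := map[string]string{}
        addTLSPlaceholders(repl, &http.Request{}) // no TLS state: no-op
    }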
* caddytls: update unsupportedProtocols
Don't export unsupportedProtocols and update its godoc to mention that
it's used for logging only.
* caddyhttp: simplify getRegTLSReplacement signature
getRegTLSReplacement should receive a string instead of a pointer.
* caddyhttp: Remove http.request.tls.client.cert replacer
The previous behavior of printing the raw certificate bytes was ported
from Caddy 1, but the usefulness of that approach is suspect. Remove
the client cert replacer from v2 until a use case is presented.
* caddyhttp: Use tls.CipherSuiteName from Go 1.14
Remove ported version of CipherSuiteName in the process.
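For reference, the standard library function that replaces the ported copy:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // tls.CipherSuiteName is new in Go 1.14; it returns the suite's
        // name, or a hex string for IDs it does not recognize.
        fmt.Println(tls.CipherSuiteName(tls.TLS_AES_128_GCM_SHA256)) // TLS_AES_128_GCM_SHA256
        fmt.Println(tls.CipherSuiteName(0x0000))                     // 0x0000
    }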
* Add handler for unhandled errors in errorChain
Currently, when an error chain is defined, the default error handler is
bypassed entirely, even if the error chain doesn't handle every error.
This results in a blank 200 OK page being returned.
For instance, it's possible for an error chain to match on the error
status code and only handle a certain subtype of errors (like 403s). In
this case, we'd want any other errors to still go through the default
handler and return an empty page with the status code.
This PR changes the "suffix handler" passed to errorChain.Compile to
set the status code of the response to the error status code.
Fixes #3053
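A hedged sketch of the idea behind that suffix handler (not the actual
errorChain code):

    package main

    import "net/http"

    // terminalErrorHandler is what an unhandled error now falls through
    // to: it writes the error's status code with an empty body instead of
    // letting the response default to 200 OK. The status value would come
    // from the handler error; here it is just a parameter.
    func terminalErrorHandler(errStatus int) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if errStatus == 0 {
                errStatus = http.StatusInternalServerError
            }
            w.WriteHeader(errStatus)
        })
    }

    func main() {
        _ = terminalErrorHandler(http.StatusForbidden)
    }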
* Move the errorHandlerChain middleware to variable
* Style fix
* Fix crash when passing "*" to the header directive.
Fixes #3060
* Look up the Host header in header and header_regexp.
Also, if more than one header value is provided, header_regexp now looks
at the extra values to match the behavior of header.
Fixes #3059
* Fix parsing of named header_regexp in Caddyfile.
See #3059
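The wrinkle is that Go's net/http promotes the Host header into
Request.Host and removes it from the header map, so both matchers need a
special case roughly like this (a sketch, not the matcher code):

    package main

    import (
        "fmt"
        "net/http"
    )

    // headerValues returns the values for a header field, treating Host
    // specially because net/http stores it in r.Host, not r.Header.
    func headerValues(r *http.Request, field string) []string {
        if http.CanonicalHeaderKey(field) == "Host" {
            return []string{r.Host}
        }
        return r.Header[http.CanonicalHeaderKey(field)]
    }

    func main() {
        r, _ := http.NewRequest("GET", "http://example.com/", nil)
        fmt.Println(headerValues(r, "host")) // [example.com]
    }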
See end of issue #3004. Loading the same certificate file multiple times
with different tags will result in it being de-duplicated in the in-
memory cache, because of course they all have the same bytes. This
meant that any certs of the same filename loaded with different tags
would be overwritten by the next certificate of the same filename, and
any conn policies looking for the tags of the previous ones would never
find them, causing connections to fail.
So, now we remember cert filenames and their tags, instead of loading
them multiple times and overwriting previous ones.
A user crafting their own JSON might make this error too... maybe we
won't see it happen. But if it does, one possibility is, when loading a
duplicate cert, to merge its tag list into the one that's already stored
in the cache before discarding it, rather than discarding it outright.
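A small sketch of the remembering idea (not the actual httpcaddyfile
code):

    package main

    import "fmt"

    func main() {
        // filename -> every tag assigned to that certificate file
        certTags := map[string][]string{}

        remember := func(filename, tag string) {
            certTags[filename] = append(certTags[filename], tag)
        }

        remember("cert.pem", "site-a")
        remember("cert.pem", "site-b") // same file, new tag: merged, not overwritten

        fmt.Println(certTags["cert.pem"]) // [site-a site-b]
    }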
The documentation specifies that the hash algorithm defaults to bcrypt.
However, the implementation returns an error in provision if no hash is
provided.
Fix this inconsistency by *actually* defaulting to bcrypt.
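A self-contained sketch of the defaulting (the type names are stand-ins,
not the actual caddyauth ones):

    package main

    import "fmt"

    // hasher and bcryptHasher stand in for the configured hash module.
    type hasher interface{ name() string }

    type bcryptHasher struct{}

    func (bcryptHasher) name() string { return "bcrypt" }

    type basicAuth struct{ hash hasher }

    // provision now defaults to bcrypt instead of returning an error when
    // no hash is configured, matching the documentation.
    func (a *basicAuth) provision() error {
        if a.hash == nil {
            a.hash = bcryptHasher{}
        }
        return nil
    }

    func main() {
        a := &basicAuth{}
        _ = a.provision()
        fmt.Println(a.hash.name()) // bcrypt
    }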
Configuration via the Caddyfile requires use of env variables, but
an upstream issue is currently blocking that:
https://github.com/go-acme/lego/issues/1054
Providers will need to be retrofitted upstream in order to support env
var configuration.
We don't load the provider directly, because the lego provider types
aren't designed for JSON configuration and they are not implemented
as Caddy modules (there are some setup steps which a Provision call
would need to do, but they do not have Provision methods; they have
their own constructor functions that we have to wrap).
Instead of loading the challenge providers directly, the modules are
simple wrappers over the challenge providers, to facilitate the JSON
config structure and to provide a consistent experience. This also lets
us swap out the underlying challenge providers transparently if needed;
it acts as a layer of abstraction.
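A hedged, self-contained sketch of the wrapper shape; every name below is
a hypothetical stand-in, not an actual Caddy or lego identifier:

    package main

    import "fmt"

    // legoStyleProvider mimics an upstream provider: no Provision method,
    // just a constructor that needs credentials.
    type legoStyleProvider struct{ apiToken string }

    func newLegoStyleProvider(token string) (*legoStyleProvider, error) {
        return &legoStyleProvider{apiToken: token}, nil
    }

    // dnsProviderModule is the thin, JSON-configurable wrapper; its
    // Provision step calls the upstream constructor so the provider type
    // stays hidden behind a consistent module surface.
    type dnsProviderModule struct {
        APIToken string `json:"api_token,omitempty"`
        provider *legoStyleProvider
    }

    func (m *dnsProviderModule) Provision() (err error) {
        m.provider, err = newLegoStyleProvider(m.APIToken)
        return err
    }

    func main() {
        m := &dnsProviderModule{APIToken: "from-env"}
        fmt.Println(m.Provision()) // <nil>
    }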
This is temporary as we prepare for a stable v2 release. We don't want
to make promises we don't know we can keep, and the Starlark integration
deserves much more focused attention which resources and funding do not
currently permit. When the project is financially stable, I will be able
to revisit this properly and add flexible, robust Starlark scripting
support to Caddy 2.
If a user provides their own certs or makes any hostname-specific TLS
connection policy, no TLS connections would be served for any other
hostnames, even though you'd expect TLS to be enabled for them, too. So
now we append a catch-all conn policy if none exists, which allows all
ClientHellos to be matched and served.
We also fix the consolidation of automation policies, which previously
gobbled up automation policies without hosts in favor of automation
policies with hosts. Instead of a host-specific policy eating up an
identical catch-all policy, the catch-all policy eats up the identical
host-specific policy, ensuring that the policy is applied to all hosts
which need it.
See also:
https://caddy.community/t/v2-automatic-https-certificate-errors/6847/9?u=matt
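A sketch of the catch-all rule (the struct is a stand-in for the actual
connection policy type):

    package main

    import "fmt"

    // connPolicy stands in for a TLS connection policy; an empty SNI list
    // means "match any ServerName".
    type connPolicy struct {
        SNI []string
    }

    // ensureCatchAll appends a policy with no matchers if every existing
    // policy is host-specific, so all ClientHellos can still be matched.
    func ensureCatchAll(policies []connPolicy) []connPolicy {
        for _, p := range policies {
            if len(p.SNI) == 0 {
                return policies // a catch-all already exists
            }
        }
        return append(policies, connPolicy{})
    }

    func main() {
        policies := ensureCatchAll([]connPolicy{{SNI: []string{"example.com"}}})
        fmt.Println(len(policies)) // 2
    }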
This ensures that if there are multiple certs that match a particular
ServerName or other parameter, the one the user provided in the Caddyfile
will be the one used.
This is necessary to avoid a race for sockets. Both the HTTP servers and
CertMagic solvers will try to bind the HTTP/HTTPS ports, but we need to
make sure that our HTTP servers bind first. This is kind of a new thing
now that management is async in Caddy 2.
Also update to CertMagic 0.9.2, which fixes some async use cases at
scale.
See https://caddy.community/t/caddy-server-that-returns-only-ip-address-as-text/6928/6?u=matt
In most cases, we will want to apply header operations immediately,
rather than waiting until the response is written. The exceptions are
generally going to be if we are deleting a header field or if a field is
to be overwritten. We now automatically defer header ops if deleting a
header field, and allow the user to manually enable deferred mode with
the defer subdirective.
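A sketch of why delete operations have to be deferred (not the actual
headers handler): the fields to remove only exist after the downstream
handler has set them, so they are stripped right before the header is
written.

    package main

    import "net/http"

    // deferredHeaderWriter removes the configured fields at WriteHeader
    // time, i.e. after the wrapped handler has had a chance to set them.
    type deferredHeaderWriter struct {
        http.ResponseWriter
        deleteFields []string
    }

    func (w *deferredHeaderWriter) WriteHeader(status int) {
        for _, field := range w.deleteFields {
            w.Header().Del(field)
        }
        w.ResponseWriter.WriteHeader(status)
    }

    func main() {
        upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Server", "upstream")
            w.WriteHeader(http.StatusOK)
        })
        // The headers middleware would wrap the real ResponseWriter like so:
        _ = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            upstream.ServeHTTP(&deferredHeaderWriter{ResponseWriter: w, deleteFields: []string{"Server"}}, r)
        })
    }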
Paths always begin with a slash, and omitting the leading slash can be
convenient in the Caddyfile to avoid confusion with a path matcher token.
I do not think there is any harm in implicitly adding the leading slash.
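The change amounts to a normalization roughly like this:

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizePath implicitly adds the leading slash when the configured
    // path omits it.
    func normalizePath(p string) string {
        if !strings.HasPrefix(p, "/") {
            p = "/" + p
        }
        return p
    }

    func main() {
        fmt.Println(normalizePath("foo/bar")) // /foo/bar
    }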